Test Report: Hyper-V_Windows 20354

f4981b37cef8a8edf9576fbca56a900d4b787caa:2025-02-03:38193

Failed tests (9/213)

TestErrorSpam/setup (181.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-903900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-903900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 --driver=hyperv: (3m1.7946141s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-903900] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=20354
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-903900" primary control-plane node in "nospam-903900" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-903900" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (181.80s)
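
The failure above is not a crashed cluster: the start completed, but stderr carried the registry warning, which error_spam_test.go treats as unexpected output. A quick way to re-check whether the VM can actually reach the registry, as a diagnostic sketch (the profile name is the one from this run; curl being available inside the guest is an assumption):

# from the host, probe the registry from inside the minikube VM
out/minikube-windows-amd64.exe -p nospam-903900 ssh -- "curl -sI --max-time 10 https://registry.k8s.io/ || echo unreachable"

If this fails while the host itself can reach the URL, the proxy guidance printed in the warning (https://minikube.sigs.k8s.io/docs/reference/networking/proxy/) is the relevant place to start.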

TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 service --namespace=default --https --url hello-node: exit status 1 (15.0556225s)
functional_test.go:1528: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-266500 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url --format={{.IP}}: exit status 1 (15.0116956s)
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1565: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url: exit status 1 (15.009378s)
functional_test.go:1578: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-266500 service hello-node --url": exit status 1
functional_test.go:1582: found endpoint for hello-node: 
functional_test.go:1590: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
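
All three ServiceCmd subtests fail the same way: "minikube service ... --url" exits 1 after roughly 15s, so the test never receives a URL (empty scheme, empty IP). When the URL lookup times out like this, the endpoint can usually be assembled by hand from the node IP and the NodePort, assuming hello-node is exposed as a NodePort service (how this functional test normally exposes it); a diagnostic sketch:

# node IP of the functional-266500 VM
out/minikube-windows-amd64.exe -p functional-266500 ip
# NodePort assigned to hello-node in the default namespace
out/minikube-windows-amd64.exe kubectl -p functional-266500 -- get svc hello-node -o jsonpath="{.spec.ports[0].nodePort}"

The expected URL is then http://<node-ip>:<node-port>; if that also does not respond, the problem is the service or network path itself rather than the "minikube service" URL helper.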

TestMultiControlPlane/serial/PingHostFromPods (65.19s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- sh -c "ping -c 1 172.25.0.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- sh -c "ping -c 1 172.25.0.1": exit status 1 (10.4490548s)

-- stdout --
	PING 172.25.0.1 (172.25.0.1): 56 data bytes
	
	--- 172.25.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.0.1) from pod (busybox-58667487b6-hcrnz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- sh -c "ping -c 1 172.25.0.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- sh -c "ping -c 1 172.25.0.1": exit status 1 (10.4514118s)

-- stdout --
	PING 172.25.0.1 (172.25.0.1): 56 data bytes
	
	--- 172.25.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.0.1) from pod (busybox-58667487b6-hjbfz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- sh -c "ping -c 1 172.25.0.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- sh -c "ping -c 1 172.25.0.1": exit status 1 (10.4280919s)

-- stdout --
	PING 172.25.0.1 (172.25.0.1): 56 data bytes
	
	--- 172.25.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.0.1) from pod (busybox-58667487b6-k7s2q): exit status 1
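
All three busybox pods resolve host.minikube.internal but lose 100% of ICMP packets to the host gateway 172.25.0.1. To tell whether the drop happens in the pod network or between the VM and the Windows host (with Hyper-V's Default Switch the host firewall commonly drops inbound ICMP echo), a diagnostic sketch is to repeat the ping from the node itself rather than from a pod; the profile name is the one from this run:

# ping the host gateway from inside the ha-429000 VM instead of from a pod
out/minikube-windows-amd64.exe -p ha-429000 ssh -- "ping -c 1 172.25.0.1"

If the node-level ping also fails, the host-side firewall or switch configuration is the more likely culprit than the cluster CNI.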
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-429000 -n ha-429000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-429000 -n ha-429000: (11.2059331s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 logs -n 25: (8.0976727s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-266500                    | functional-266500 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:58 UTC | 03 Feb 25 10:58 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-266500 image build -t     | functional-266500 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:58 UTC | 03 Feb 25 10:58 UTC |
	|         | localhost/my-image:functional-266500 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-266500 image ls           | functional-266500 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:58 UTC | 03 Feb 25 10:59 UTC |
	| delete  | -p functional-266500                 | functional-266500 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:01 UTC | 03 Feb 25 11:02 UTC |
	| start   | -p ha-429000 --wait=true             | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:02 UTC | 03 Feb 25 11:13 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- apply -f             | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- rollout status       | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- get pods -o          | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- get pods -o          | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hcrnz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hjbfz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-k7s2q --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hcrnz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hjbfz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-k7s2q --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hcrnz -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hjbfz -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-k7s2q -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- get pods -o          | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC | 03 Feb 25 11:13 UTC |
	|         | busybox-58667487b6-hcrnz             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:13 UTC |                     |
	|         | busybox-58667487b6-hcrnz -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.0.1              |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | busybox-58667487b6-hjbfz             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:14 UTC |                     |
	|         | busybox-58667487b6-hjbfz -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.0.1              |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:14 UTC | 03 Feb 25 11:14 UTC |
	|         | busybox-58667487b6-k7s2q             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-429000 -- exec                 | ha-429000         | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:14 UTC |                     |
	|         | busybox-58667487b6-k7s2q -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.0.1              |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:02:36
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:02:36.636040   12544 out.go:345] Setting OutFile to fd 1628 ...
	I0203 11:02:36.695209   12544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:02:36.695209   12544 out.go:358] Setting ErrFile to fd 392...
	I0203 11:02:36.695209   12544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:02:36.715129   12544 out.go:352] Setting JSON to false
	I0203 11:02:36.717962   12544 start.go:129] hostinfo: {"hostname":"minikube5","uptime":165158,"bootTime":1738415398,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 11:02:36.718059   12544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 11:02:36.724491   12544 out.go:177] * [ha-429000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 11:02:36.728915   12544 notify.go:220] Checking for updates...
	I0203 11:02:36.730973   12544 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:02:36.733322   12544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:02:36.735558   12544 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 11:02:36.737932   12544 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:02:36.740356   12544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:02:36.743141   12544 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:02:41.619465   12544 out.go:177] * Using the hyperv driver based on user configuration
	I0203 11:02:41.625437   12544 start.go:297] selected driver: hyperv
	I0203 11:02:41.625437   12544 start.go:901] validating driver "hyperv" against <nil>
	I0203 11:02:41.625437   12544 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:02:41.671256   12544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:02:41.672472   12544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:02:41.672472   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:02:41.672472   12544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0203 11:02:41.672472   12544 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 11:02:41.673083   12544 start.go:340] cluster config:
	{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0203 11:02:41.673083   12544 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:02:41.677983   12544 out.go:177] * Starting "ha-429000" primary control-plane node in "ha-429000" cluster
	I0203 11:02:41.686815   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:02:41.686815   12544 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 11:02:41.686815   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:02:41.687585   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:02:41.687585   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:02:41.688971   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:02:41.689732   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json: {Name:mk7825012338486fc7b9918dde319dc426284704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:02:41.691089   12544 start.go:360] acquireMachinesLock for ha-429000: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:02:41.691089   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000"
	I0203 11:02:41.691089   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:02:41.691745   12544 start.go:125] createHost starting for "" (driver="hyperv")
	I0203 11:02:41.695079   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:02:41.695844   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:02:41.695916   12544 client.go:168] LocalClient.Create starting
	I0203 11:02:41.696369   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:02:41.696610   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:02:41.696647   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:02:41.696820   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:02:41.696886   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:02:41.696886   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:02:41.696886   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:02:43.589007   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:02:43.589106   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:43.589106   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:02:45.192205   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:02:45.192407   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:45.192407   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:02:46.631999   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:02:46.631999   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:46.632794   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:02:49.913390   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:02:49.913390   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:49.914706   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:02:50.356920   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:02:50.472928   12544 main.go:141] libmachine: Creating VM...
	I0203 11:02:50.472928   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:02:53.034927   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:02:53.035299   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:53.035395   12544 main.go:141] libmachine: Using switch "Default Switch"
	I0203 11:02:53.035395   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:02:54.640401   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:02:54.640810   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:54.640810   12544 main.go:141] libmachine: Creating VHD
	I0203 11:02:54.640929   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:02:58.250878   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ABEDF975-BA03-4A02-84F3-295B7D025EC3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:02:58.250878   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:58.250878   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:02:58.250878   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:02:58.263124   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:03:01.245829   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:01.246561   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:01.246561   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd' -SizeBytes 20000MB
	I0203 11:03:03.608156   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:03.608156   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:03.609021   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:03:06.977987   12544 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-429000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:03:06.977987   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:06.978504   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000 -DynamicMemoryEnabled $false
	I0203 11:03:09.103029   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:09.103029   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:09.103460   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000 -Count 2
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\boot2docker.iso'
	I0203 11:03:13.447575   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:13.447575   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:13.447878   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd'
	I0203 11:03:15.897161   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:15.897161   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:15.897161   12544 main.go:141] libmachine: Starting VM...
	I0203 11:03:15.897256   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000
	I0203 11:03:18.768569   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:18.768762   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:18.768762   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:03:18.768762   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:23.138898   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:23.139461   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:24.139579   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:26.113236   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:26.113236   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:26.114222   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:28.422252   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:28.422252   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:29.423373   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:31.442167   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:31.442659   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:31.442659   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:33.751203   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:33.751203   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:34.753082   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:36.769493   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:36.769493   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:36.769577   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:39.086225   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:39.086225   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:40.088027   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:42.125238   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:42.125277   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:42.125277   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:44.533227   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:44.533227   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:44.533660   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:46.545049   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:50.861050   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:50.861050   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:50.866060   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:50.881148   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:50.881148   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:03:51.016450   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:03:51.016544   12544 buildroot.go:166] provisioning hostname "ha-429000"
	I0203 11:03:51.016544   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:52.990749   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:52.990749   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:52.991485   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:55.345005   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:55.345005   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:55.349936   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:55.350347   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:55.350347   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000 && echo "ha-429000" | sudo tee /etc/hostname
	I0203 11:03:55.500160   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000
	
	I0203 11:03:55.500297   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:59.849315   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:59.849947   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:59.854073   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:59.854708   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:59.854708   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:03:59.993140   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:03:59.993260   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:03:59.993260   12544 buildroot.go:174] setting up certificates
	I0203 11:03:59.993260   12544 provision.go:84] configureAuth start
	I0203 11:03:59.993371   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:01.934604   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:01.934604   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:01.935504   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:04.287574   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:04.287647   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:04.287647   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:06.305410   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:06.305410   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:06.306005   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:08.654040   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:08.654040   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:08.654040   12544 provision.go:143] copyHostCerts
	I0203 11:04:08.654946   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:04:08.654946   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:04:08.654946   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:04:08.655709   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:04:08.656319   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:04:08.656917   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:04:08.656917   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:04:08.656917   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:04:08.659040   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:04:08.659040   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:04:08.659040   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:04:08.659654   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:04:08.661158   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000 san=[127.0.0.1 172.25.12.47 ha-429000 localhost minikube]
	I0203 11:04:08.764668   12544 provision.go:177] copyRemoteCerts
	I0203 11:04:08.772662   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:04:08.772662   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:10.688102   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:10.688102   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:10.689114   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:13.041812   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:13.041812   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:13.042587   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:13.143562   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3708495s)
	I0203 11:04:13.143562   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:04:13.143562   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:04:13.188264   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:04:13.188943   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0203 11:04:13.232749   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:04:13.232749   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:04:13.277256   12544 provision.go:87] duration metric: took 13.2837973s to configureAuth
	I0203 11:04:13.277290   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:04:13.277718   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:04:13.277718   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:15.259940   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:15.259940   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:15.260592   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:17.580224   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:17.580224   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:17.585328   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:17.585328   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:17.585328   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:04:17.707918   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:04:17.707979   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:04:17.708237   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:04:17.708318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:19.645853   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:19.646432   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:19.646534   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:21.963841   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:21.963841   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:21.970240   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:21.970835   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:21.970835   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:04:22.128681   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:04:22.128681   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:24.068711   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:24.068711   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:24.069400   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:26.413108   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:26.413345   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:26.418316   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:26.418972   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:26.418972   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:04:28.623313   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:04:28.623313   12544 machine.go:96] duration metric: took 42.0777802s to provisionDockerMachine
	I0203 11:04:28.623313   12544 client.go:171] duration metric: took 1m46.9261678s to LocalClient.Create
	I0203 11:04:28.623313   12544 start.go:167] duration metric: took 1m46.9262757s to libmachine.API.Create "ha-429000"
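The Docker provisioning that just finished installs the generated unit with a write-if-changed idiom: the new unit is written to docker.service.new, diffed against the installed one, and only on a difference is it moved into place and the daemon reloaded, enabled, and restarted. On this first boot the diff itself fails ("can't stat '/lib/systemd/system/docker.service'"), so the replace-and-restart branch always runs and systemd creates the multi-user.target.wants symlink seen above. A generic sketch of the idiom, with the unit content as a placeholder:

    # hedged sketch of the write-if-changed pattern used above
    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    # ... generated unit content goes here (placeholder) ...
    EOF
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }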
	I0203 11:04:28.623313   12544 start.go:293] postStartSetup for "ha-429000" (driver="hyperv")
	I0203 11:04:28.623313   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:04:28.632777   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:04:28.632777   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:30.579483   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:30.579483   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:30.579893   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:32.885239   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:32.885239   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:32.885798   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:32.999510   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3666832s)
	I0203 11:04:33.007791   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:04:33.014801   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:04:33.014801   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:04:33.015397   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:04:33.016087   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:04:33.016087   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:04:33.023748   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:04:33.042114   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:04:33.086815   12544 start.go:296] duration metric: took 4.4634505s for postStartSetup
	I0203 11:04:33.090670   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:35.051345   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:35.051638   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:35.051638   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:37.420889   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:37.421337   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:37.421374   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:04:37.423604   12544 start.go:128] duration metric: took 1m55.7305288s to createHost
	I0203 11:04:37.423658   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:39.381519   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:39.381519   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:39.381936   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:41.694058   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:41.694058   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:41.698414   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:41.699075   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:41.699075   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:04:41.830241   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738580681.840494996
	
	I0203 11:04:41.830349   12544 fix.go:216] guest clock: 1738580681.840494996
	I0203 11:04:41.830349   12544 fix.go:229] Guest: 2025-02-03 11:04:41.840494996 +0000 UTC Remote: 2025-02-03 11:04:37.4236582 +0000 UTC m=+120.886945701 (delta=4.416836796s)
	I0203 11:04:41.830423   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:43.772530   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:43.772530   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:43.772729   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:46.139036   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:46.139085   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:46.142822   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:46.143263   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:46.143263   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738580681
	I0203 11:04:46.288894   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:04:41 UTC 2025
	
	I0203 11:04:46.288894   12544 fix.go:236] clock set: Mon Feb  3 11:04:41 UTC 2025
	 (err=<nil>)
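Before releasing the machine lock, libmachine compares the guest clock (date +%s.%N over SSH) with the host-side timestamp; here the two differed by roughly 4.4 s (delta=4.416836796s above), so the guest clock is reset with date -s @<epoch>. A minimal sketch of the same check, run from any host with SSH access to the VM; the 2-second threshold below is an assumption, not minikube's actual cutoff:

    # hedged sketch: compare the guest clock to a reference epoch and reset it if skewed
    ref=$(date +%s)                                   # reference time taken locally
    guest=$(ssh docker@172.25.12.47 'date +%s')       # guest time over SSH
    skew=$(( guest - ref ))
    if [ "${skew#-}" -gt 2 ]; then                    # 2-second threshold is an assumption
      ssh docker@172.25.12.47 "sudo date -s @${ref}"
    fi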
	I0203 11:04:46.288894   12544 start.go:83] releasing machines lock for "ha-429000", held for 2m4.5963721s
	I0203 11:04:46.288894   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:50.547260   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:50.547260   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:50.550750   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:04:50.550830   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:50.557818   12544 ssh_runner.go:195] Run: cat /version.json
	I0203 11:04:50.557887   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:54.999994   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:54.999994   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:55.001397   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:55.021292   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:55.021292   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:55.022172   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:55.091837   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5410349s)
	W0203 11:04:55.092855   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 11:04:55.116367   12544 ssh_runner.go:235] Completed: cat /version.json: (4.5584971s)
	I0203 11:04:55.128751   12544 ssh_runner.go:195] Run: systemctl --version
	I0203 11:04:55.149747   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:04:55.158387   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:04:55.170328   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:04:55.198575   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:04:55.198575   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:04:55.198647   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0203 11:04:55.225287   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:04:55.225313   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
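The two warnings above appear to be triggered by the connectivity probe a few lines earlier: the Windows binary name curl.exe is passed through ssh_runner into the Linux guest, whose shell answers "curl.exe: command not found" (exit status 127), so the check fails before any network traffic is attempted. A probe that would actually exercise the network from inside the VM, assuming the Buildroot guest image ships a plain curl binary (an assumption, and the key path below is shortened for illustration):

    # hedged sketch: run the registry probe with the Linux binary name instead
    ssh -i .minikube/machines/ha-429000/id_rsa docker@172.25.12.47 \
      'curl -sS -m 2 https://registry.k8s.io/ && echo reachable'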
	I0203 11:04:55.244971   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 11:04:55.274205   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:04:55.297592   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:04:55.305297   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:04:55.333341   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:04:55.362607   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:04:55.392266   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:04:55.422264   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:04:55.452037   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:04:55.480368   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:04:55.507459   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 11:04:55.534662   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:04:55.552956   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:04:55.560291   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:04:55.590148   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
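Before the container runtimes are configured, minikube checks that bridged traffic is visible to iptables; the sysctl probe above fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. The equivalent manual sequence is sketched below; making the settings persistent across reboots is not shown in this log and is left out here as well:

    # hedged sketch of the check-then-fix sequence seen above
    sudo sysctl net.bridge.bridge-nf-call-iptables \
      || sudo modprobe br_netfilter                    # load the module if the key is missing
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # enable IPv4 forwarding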
	I0203 11:04:55.617307   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:04:55.817223   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:04:55.849497   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:04:55.857296   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:04:55.888999   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:04:55.921070   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:04:55.952902   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:04:55.984649   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:04:56.015146   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:04:56.073733   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:04:56.097514   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:04:56.143949   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:04:56.159232   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:04:56.176107   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:04:56.216555   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:04:56.420343   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:04:56.612144   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:04:56.612353   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:04:56.653569   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:04:56.837834   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:04:59.416917   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5790533s)
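The 130-byte /etc/docker/daemon.json scp'd above is what switches Docker to the "cgroupfs" cgroup driver, and the restart that follows picks it up; the exact file content is not printed in this log. A typical minikube-style daemon.json for this setting might look like the sketch below, which is an assumption for illustration, not the actual bytes copied here:

    # hedged sketch: plausible cgroupfs daemon.json (assumed, not taken from this log)
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker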
	I0203 11:04:59.425089   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:04:59.456518   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:04:59.491802   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:04:59.672346   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:04:59.866811   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:00.052443   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:05:00.090656   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:05:00.121575   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:00.314084   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:05:00.417377   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:05:00.426677   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:05:00.434854   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:05:00.443568   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:05:00.456358   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:05:00.509275   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:05:00.517275   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:05:00.557081   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:05:00.592070   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:05:00.592070   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:05:00.598907   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:05:00.598907   12544 ip.go:214] interface addr: 172.25.0.1/20
	I0203 11:05:00.607164   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:05:00.613461   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:05:00.647506   12544 kubeadm.go:883] updating cluster {Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:05:00.648512   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:05:00.655164   12544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 11:05:00.679965   12544 docker.go:689] Got preloaded images: 
	I0203 11:05:00.680022   12544 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0203 11:05:00.689486   12544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 11:05:00.715666   12544 ssh_runner.go:195] Run: which lz4
	I0203 11:05:00.722040   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0203 11:05:00.730243   12544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:05:00.735797   12544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:05:00.735797   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0203 11:05:02.080569   12544 docker.go:653] duration metric: took 1.3581868s to copy over tarball
	I0203 11:05:02.090447   12544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:05:10.804136   12544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7135898s)
	I0203 11:05:10.804136   12544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:05:10.864010   12544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 11:05:10.881323   12544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0203 11:05:10.923405   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:11.118720   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:05:14.486504   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3677454s)
	I0203 11:05:14.494726   12544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 11:05:14.522484   12544 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 11:05:14.522484   12544 cache_images.go:84] Images are preloaded, skipping loading
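The preload path above avoids pulling each control-plane image over the network: minikube stats /preloaded.tar.lz4 on the guest, copies the ~350 MB cached tarball over when it is missing, extracts it into /var with lz4, restores repositories.json, restarts Docker, and then lists the images to confirm everything for v1.32.1 is present. Condensed into a sketch run inside the VM (the copy step is only indicated, since minikube performs it through its own file transfer):

    # hedged sketch of the preload sequence shown above (run inside the VM)
    if ! stat -c "%s %y" /preloaded.tar.lz4; then
      echo "tarball missing: minikube copies it from the host cache at this point"  # placeholder
    fi
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'    # should now list the v1.32.1 images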
	I0203 11:05:14.522484   12544 kubeadm.go:934] updating node { 172.25.12.47 8443 v1.32.1 docker true true} ...
	I0203 11:05:14.522484   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:05:14.529951   12544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 11:05:14.595314   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:05:14.595314   12544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 11:05:14.595314   12544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:05:14.595314   12544 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.12.47 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-429000 NodeName:ha-429000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.12.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.12.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:05:14.595554   12544 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.12.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-429000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.12.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.12.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 11:05:14.595642   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:05:14.603294   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:05:14.632170   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:05:14.632281   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
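This static-pod manifest runs kube-vip with leader election on the control plane: the elected node announces the HA virtual IP 172.25.15.254 on eth0 via ARP, and lb_enable/lb_port additionally load-balance API-server traffic on 8443, matching the "auto-enabling control-plane load-balancing" line above. A quick check of which node currently holds the VIP, assuming SSH access to a control-plane node (a sketch, not part of the test):

    # hedged sketch: does this control-plane node currently hold the kube-vip VIP?
    ip addr show eth0 | grep -F '172.25.15.254/32' \
      && echo "this node is the current kube-vip leader"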
	I0203 11:05:14.640644   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:05:14.660264   12544 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:05:14.667899   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0203 11:05:14.684838   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0203 11:05:14.714339   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:05:14.743057   12544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0203 11:05:14.771632   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0203 11:05:14.807164   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:05:14.812898   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
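Two host aliases are injected into the guest's /etc/hosts with the same grep-and-append idiom: host.minikube.internal maps to the Windows host's Default Switch address 172.25.0.1 discovered earlier, and control-plane.minikube.internal maps to the HA VIP 172.25.15.254 rather than a single node IP, so API-server clients keep a stable endpoint. The expected result inside the VM, reconstructed from the two commands rather than read back:

    # hedged sketch: expected /etc/hosts additions after the two commands above
    grep minikube.internal /etc/hosts
    # 172.25.0.1      host.minikube.internal
    # 172.25.15.254   control-plane.minikube.internal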
	I0203 11:05:14.841502   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:15.023078   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:05:15.053418   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.12.47
	I0203 11:05:15.053418   12544 certs.go:194] generating shared ca certs ...
	I0203 11:05:15.053418   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.054290   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:05:15.054578   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:05:15.054780   12544 certs.go:256] generating profile certs ...
	I0203 11:05:15.054780   12544 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:05:15.054780   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt with IP's: []
	I0203 11:05:15.123746   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt ...
	I0203 11:05:15.123746   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt: {Name:mk21594987226891b0c4f972f870b155c5d864cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.125805   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key ...
	I0203 11:05:15.125805   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key: {Name:mkcf578e3dae88b14a8a464a3a8699cfe02a0a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.126221   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f
	I0203 11:05:15.126221   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.15.254]
	I0203 11:05:15.287451   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f ...
	I0203 11:05:15.287451   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f: {Name:mk54f3556c0c51c77a0cf6c7587764da5183a0ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.288610   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f ...
	I0203 11:05:15.288610   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f: {Name:mkf3715d2b09c66d1e874f0449dfd4c304fef4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.289817   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:05:15.304355   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
	I0203 11:05:15.305428   12544 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:05:15.305566   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt with IP's: []
	I0203 11:05:15.865765   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt ...
	I0203 11:05:15.865765   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt: {Name:mk7b154d21f2248eaa830b2d9ad69b94e0288b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.866937   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key ...
	I0203 11:05:15.866937   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key: {Name:mke4e4b4019cc65c959d9f37f62d35a296df9db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.868184   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:05:15.869565   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:05:15.869605   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:05:15.882424   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:05:15.882680   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:05:15.883337   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:05:15.884084   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:05:15.884576   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:05:15.884783   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:05:15.884988   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:15.885082   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:05:15.886281   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:05:15.931752   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:05:15.975677   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:05:16.023252   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:05:16.069105   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 11:05:16.110991   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:05:16.149349   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:05:16.200916   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:05:16.246911   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:05:16.293587   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:05:16.337643   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:05:16.380754   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:05:16.418302   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:05:16.436215   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:05:16.464712   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.472495   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.480946   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.497926   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:05:16.526276   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:05:16.552913   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.559994   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.568720   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.585055   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:05:16.614247   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:05:16.641907   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.649407   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.657451   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.674812   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
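Each CA copied to /usr/share/ca-certificates is activated by symlinking it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem above), which is how OpenSSL-based clients in the guest find trusted CAs. The generic form of the step, shown here for the minikubeCA file from this log:

    # hedged sketch of the hash-symlink step applied above to each CA file
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"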
	I0203 11:05:16.703901   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:05:16.710585   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:05:16.710879   12544 kubeadm.go:392] StartCluster: {Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:05:16.717461   12544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 11:05:16.751642   12544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:05:16.783825   12544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:05:16.812308   12544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:05:16.829267   12544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:05:16.829267   12544 kubeadm.go:157] found existing configuration files:
	
	I0203 11:05:16.837433   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:05:16.853412   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:05:16.861702   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:05:16.888390   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:05:16.906674   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:05:16.915717   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:05:16.941516   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:05:16.958769   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:05:16.967678   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:05:16.993691   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:05:17.009296   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:05:17.017976   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
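
The config check at 11:05:16 fails because none of the kubeconfig files exist yet on the fresh node, so the stale-config cleanup is skipped; minikube then greps each file for the expected control-plane endpoint and deletes any file that does not contain it (here every grep exits with status 2 because the files are missing). A minimal shell sketch of that per-file check follows; it is illustrative only, not minikube's actual Go implementation in kubeadm.go, and the endpoint and file list are taken from the log lines above.

    #!/usr/bin/env bash
    # Keep a kubeconfig only if it already points at the expected endpoint;
    # grep exits non-zero when the file is missing or has a different endpoint,
    # which is the "may not be in ... - will remove" case seen in the log.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/$f"
      if ! sudo grep -q "$endpoint" "$path" 2>/dev/null; then
        sudo rm -f "$path"
      fi
    done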
	I0203 11:05:17.035480   12544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:05:17.411347   12544 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:05:31.241346   12544 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0203 11:05:31.241469   12544 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:05:31.241607   12544 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:05:31.241849   12544 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:05:31.242104   12544 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 11:05:31.242241   12544 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:05:31.246026   12544 out.go:235]   - Generating certificates and keys ...
	I0203 11:05:31.246026   12544 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 11:05:31.247580   12544 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 11:05:31.247748   12544 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-429000 localhost] and IPs [172.25.12.47 127.0.0.1 ::1]
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 11:05:31.248452   12544 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-429000 localhost] and IPs [172.25.12.47 127.0.0.1 ::1]
	I0203 11:05:31.248452   12544 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 11:05:31.248804   12544 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 11:05:31.248942   12544 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 11:05:31.249080   12544 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:05:31.249119   12544 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:05:31.249872   12544 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:05:31.253307   12544 out.go:235]   - Booting up control plane ...
	I0203 11:05:31.254284   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:05:31.254546   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:05:31.254757   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:05:31.254903   12544 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:05:31.255115   12544 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:05:31.255115   12544 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:05:31.255402   12544 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 11:05:31.255676   12544 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 11:05:31.255859   12544 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002122061s
	I0203 11:05:31.256036   12544 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 11:05:31.256212   12544 kubeadm.go:310] [api-check] The API server is healthy after 7.501871328s
	I0203 11:05:31.256364   12544 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 11:05:31.256674   12544 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 11:05:31.256867   12544 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 11:05:31.257003   12544 kubeadm.go:310] [mark-control-plane] Marking the node ha-429000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 11:05:31.257322   12544 kubeadm.go:310] [bootstrap-token] Using token: 35pwxs.9cd3az0fhrerr81u
	I0203 11:05:31.259948   12544 out.go:235]   - Configuring RBAC rules ...
	I0203 11:05:31.260626   12544 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 11:05:31.260844   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 11:05:31.261043   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 11:05:31.261363   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 11:05:31.261643   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 11:05:31.261839   12544 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 11:05:31.261955   12544 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 11:05:31.261955   12544 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0203 11:05:31.261955   12544 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0203 11:05:31.261955   12544 kubeadm.go:310] 
	I0203 11:05:31.261955   12544 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0203 11:05:31.261955   12544 kubeadm.go:310] 
	I0203 11:05:31.262632   12544 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0203 11:05:31.262632   12544 kubeadm.go:310] 
	I0203 11:05:31.262632   12544 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0203 11:05:31.262632   12544 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 11:05:31.262956   12544 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 11:05:31.263076   12544 kubeadm.go:310] 
	I0203 11:05:31.263267   12544 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0203 11:05:31.263267   12544 kubeadm.go:310] 
	I0203 11:05:31.263377   12544 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 11:05:31.263377   12544 kubeadm.go:310] 
	I0203 11:05:31.263482   12544 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0203 11:05:31.263652   12544 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 11:05:31.263872   12544 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 11:05:31.263917   12544 kubeadm.go:310] 
	I0203 11:05:31.264007   12544 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 11:05:31.264007   12544 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0203 11:05:31.264007   12544 kubeadm.go:310] 
	I0203 11:05:31.264007   12544 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 35pwxs.9cd3az0fhrerr81u \
	I0203 11:05:31.264645   12544 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce \
	I0203 11:05:31.264690   12544 kubeadm.go:310] 	--control-plane 
	I0203 11:05:31.264690   12544 kubeadm.go:310] 
	I0203 11:05:31.264908   12544 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0203 11:05:31.264908   12544 kubeadm.go:310] 
	I0203 11:05:31.265121   12544 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 35pwxs.9cd3az0fhrerr81u \
	I0203 11:05:31.265337   12544 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
	I0203 11:05:31.265337   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:05:31.265337   12544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 11:05:31.269437   12544 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 11:05:31.281180   12544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 11:05:31.288899   12544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 11:05:31.288899   12544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 11:05:31.337415   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 11:05:31.908795   12544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:05:31.919247   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000 minikube.k8s.io/updated_at=2025_02_03T11_05_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=true
	I0203 11:05:31.919917   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:31.967537   12544 ops.go:34] apiserver oom_adj: -16
	I0203 11:05:32.188604   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:32.688247   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:33.189805   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:33.688396   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:34.190477   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:34.690001   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:35.188629   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:35.316485   12544 kubeadm.go:1113] duration metric: took 3.4076514s to wait for elevateKubeSystemPrivileges
	I0203 11:05:35.316485   12544 kubeadm.go:394] duration metric: took 18.6053941s to StartCluster
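
The repeated "kubectl get sa default" calls above are a readiness poll: minikube creates the minikube-rbac clusterrolebinding and then retries roughly every 500 ms until the "default" service account has been created by the controller manager. A plain-shell equivalent of that step is sketched below (illustrative; the real implementation is the elevateKubeSystemPrivileges Go code referenced in the duration line).

    # Grant cluster-admin to kube-system's default service account, then wait
    # until the "default" service account exists before proceeding.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries about every 500 ms
    done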
	I0203 11:05:35.316485   12544 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:35.316485   12544 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:05:35.318838   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:35.319962   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 11:05:35.320109   12544 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:05:35.320109   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:05:35.320109   12544 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:05:35.320332   12544 addons.go:69] Setting storage-provisioner=true in profile "ha-429000"
	I0203 11:05:35.320332   12544 addons.go:69] Setting default-storageclass=true in profile "ha-429000"
	I0203 11:05:35.320332   12544 addons.go:238] Setting addon storage-provisioner=true in "ha-429000"
	I0203 11:05:35.320332   12544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-429000"
	I0203 11:05:35.320437   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:05:35.320554   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:05:35.321398   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:35.321732   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:35.482552   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 11:05:35.962598   12544 start.go:971] {"host.minikube.internal": 172.25.0.1} host record injected into CoreDNS's ConfigMap
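
The sed/kubectl pipeline at 11:05:35.482 edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host gateway (172.25.0.1 here) ahead of the "forward . /etc/resolv.conf" line, adds "log" after "errors", and replaces the ConfigMap, which is what the "host record injected" line confirms. A generic way to verify the injected record afterwards (not part of minikube's own flow):

    # Print the Corefile after minikube's edit; expect a stanza like:
    #   hosts {
    #      172.25.0.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'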
	I0203 11:05:37.406737   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:37.406737   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:37.409403   12544 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:05:37.411525   12544 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:05:37.411525   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:05:37.411525   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:37.417806   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:37.417806   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:37.418706   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:05:37.418706   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 11:05:37.420544   12544 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 11:05:37.421259   12544 addons.go:238] Setting addon default-storageclass=true in "ha-429000"
	I0203 11:05:37.421259   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:05:37.421927   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:39.510253   12544 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:05:39.510253   12544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:39.555519   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:39.555519   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:39.555633   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:05:42.339264   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:05:42.340282   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:42.340580   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:05:42.469873   12544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:05:44.088766   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:05:44.088880   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:44.088880   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:05:44.221173   12544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:05:44.428233   12544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 11:05:44.428233   12544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 11:05:44.429230   12544 round_trippers.go:463] GET https://172.25.15.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0203 11:05:44.429230   12544 round_trippers.go:469] Request Headers:
	I0203 11:05:44.429230   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:05:44.429230   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:05:44.442211   12544 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 11:05:44.443156   12544 round_trippers.go:463] PUT https://172.25.15.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0203 11:05:44.443156   12544 round_trippers.go:469] Request Headers:
	I0203 11:05:44.443215   12544 round_trippers.go:473]     Content-Type: application/json
	I0203 11:05:44.443215   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:05:44.443215   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:05:44.446920   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:05:44.450111   12544 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 11:05:44.452534   12544 addons.go:514] duration metric: took 9.1323212s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0203 11:05:44.452597   12544 start.go:246] waiting for cluster config update ...
	I0203 11:05:44.452597   12544 start.go:255] writing updated cluster config ...
	I0203 11:05:44.455048   12544 out.go:201] 
	I0203 11:05:44.468415   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:05:44.468415   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:05:44.473413   12544 out.go:177] * Starting "ha-429000-m02" control-plane node in "ha-429000" cluster
	I0203 11:05:44.475414   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:05:44.475414   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:05:44.475414   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:05:44.475414   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:05:44.476410   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:05:44.485409   12544 start.go:360] acquireMachinesLock for ha-429000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:05:44.485409   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000-m02"
	I0203 11:05:44.485409   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:05:44.485409   12544 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0203 11:05:44.488419   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:05:44.488419   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:05:44.488419   12544 client.go:168] LocalClient.Create starting
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:05:46.268227   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:05:46.268227   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:46.269172   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:05:47.875039   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:05:47.875784   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:47.875784   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:05:49.264588   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:05:49.264588   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:49.265181   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:05:52.666998   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:05:52.666998   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:52.668815   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:05:53.079625   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:05:53.171177   12544 main.go:141] libmachine: Creating VM...
	I0203 11:05:53.172172   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:05:55.798887   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:05:55.799494   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:55.799589   12544 main.go:141] libmachine: Using switch "Default Switch"
	I0203 11:05:55.799589   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:05:57.424027   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:05:57.424027   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:57.424844   12544 main.go:141] libmachine: Creating VHD
	I0203 11:05:57.424889   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:06:00.998300   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E66ADEE4-F243-4E9B-A93D-4BA9DC2A0585
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:06:00.998300   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:00.998300   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:06:00.998415   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:06:01.011278   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:06:04.062923   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:04.062923   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:04.063724   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd' -SizeBytes 20000MB
	I0203 11:06:06.466753   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:06.466753   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:06.467817   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:06:09.835625   12544 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-429000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:06:09.836455   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:09.836527   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000-m02 -DynamicMemoryEnabled $false
	I0203 11:06:11.963374   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:11.963426   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:11.963426   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000-m02 -Count 2
	I0203 11:06:14.018852   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:14.018852   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:14.019720   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\boot2docker.iso'
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd'
	I0203 11:06:18.884676   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:18.885271   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:18.885271   12544 main.go:141] libmachine: Starting VM...
	I0203 11:06:18.885353   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m02
	I0203 11:06:21.803470   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:21.804468   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:21.804468   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:06:21.804521   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:23.905607   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:23.905607   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:23.906338   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:26.246229   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:26.246329   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:27.247298   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:29.250137   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:29.250318   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:29.250318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:31.536942   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:31.536942   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:32.538368   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:34.563022   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:34.563964   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:34.564056   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:36.877834   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:36.877834   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:37.878527   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:42.183172   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:42.183240   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:43.184264   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:45.196955   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:45.196955   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:45.197965   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:49.547661   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:49.548047   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:49.548047   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:06:49.548140   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:53.932273   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:53.932273   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:53.937350   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:06:53.950248   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:06:53.950248   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:06:54.084372   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:06:54.084372   12544 buildroot.go:166] provisioning hostname "ha-429000-m02"
	I0203 11:06:54.084372   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:56.056531   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:56.056531   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:56.056629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:58.365371   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:58.365371   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:58.370098   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:06:58.370585   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:06:58.370585   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000-m02 && echo "ha-429000-m02" | sudo tee /etc/hostname
	I0203 11:06:58.533727   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m02
	
	I0203 11:06:58.533843   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:02.835448   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:02.836192   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:02.839971   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:02.840404   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:02.840404   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:07:02.997122   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:07:02.997122   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:07:02.997122   12544 buildroot.go:174] setting up certificates
	I0203 11:07:02.997122   12544 provision.go:84] configureAuth start
	I0203 11:07:02.997122   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:04.993734   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:04.993734   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:04.993950   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:07.371247   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:07.372137   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:07.372196   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:09.338927   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:09.339437   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:09.339437   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:11.706190   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:11.707195   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:11.707336   12544 provision.go:143] copyHostCerts
	I0203 11:07:11.707336   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:07:11.707336   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:07:11.707336   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:07:11.708017   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:07:11.708607   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:07:11.708607   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:07:11.708607   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:07:11.709222   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:07:11.709883   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:07:11.710061   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:07:11.710061   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:07:11.710355   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:07:11.711101   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m02 san=[127.0.0.1 172.25.13.142 ha-429000-m02 localhost minikube]
	I0203 11:07:11.952210   12544 provision.go:177] copyRemoteCerts
	I0203 11:07:11.960728   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:07:11.960799   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:16.249359   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:16.249359   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:16.249726   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:16.357054   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3962758s)
	I0203 11:07:16.357137   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:07:16.357495   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:07:16.403693   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:07:16.403856   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:07:16.449050   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:07:16.449457   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:07:16.495449   12544 provision.go:87] duration metric: took 13.4981727s to configureAuth
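
configureAuth generates a per-node server certificate whose SAN list (127.0.0.1, the node IP 172.25.13.142, ha-429000-m02, localhost, minikube) comes straight from the provision.go line above, then copies ca.pem, server.pem and server-key.pem into /etc/docker; dockerd is later started with --tlscert pointing at that server.pem. If TLS verification against port 2376 ever fails, inspecting the SANs is a quick sanity check. This is a generic openssl command, not something minikube runs, and it assumes openssl is available where you run it (the same file also exists on the host under .minikube\machines\server.pem).

    # Print the Subject Alternative Names baked into the generated server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'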
	I0203 11:07:16.495449   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:07:16.496297   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:07:16.496297   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:18.495945   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:18.496575   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:18.496761   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:20.869730   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:20.869785   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:20.873967   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:20.873967   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:20.873967   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:07:21.007325   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:07:21.007393   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:07:21.007393   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:07:21.007393   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:22.963472   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:22.963472   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:22.964349   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:25.316955   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:25.318022   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:25.322261   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:25.322261   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:25.322787   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.47"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:07:25.486402   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.47
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:07:25.486514   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:27.448291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:27.448291   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:27.448381   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:29.808286   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:29.808286   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:29.813385   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:29.813786   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:29.813786   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:07:32.016735   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:07:32.016798   12544 machine.go:96] duration metric: took 42.4682663s to provisionDockerMachine
	I0203 11:07:32.016798   12544 client.go:171] duration metric: took 1m47.5271525s to LocalClient.Create
	I0203 11:07:32.016798   12544 start.go:167] duration metric: took 1m47.5271525s to libmachine.API.Create "ha-429000"
	I0203 11:07:32.016869   12544 start.go:293] postStartSetup for "ha-429000-m02" (driver="hyperv")
	I0203 11:07:32.016869   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:07:32.024681   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:07:32.024681   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:33.952025   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:33.952025   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:33.953006   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:36.328038   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:36.328192   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:36.328770   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:36.433565   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4088342s)
	I0203 11:07:36.443143   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:07:36.450301   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:07:36.450301   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:07:36.450301   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:07:36.451562   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:07:36.451667   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:07:36.463167   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:07:36.481285   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:07:36.531887   12544 start.go:296] duration metric: took 4.5148747s for postStartSetup
	I0203 11:07:36.533915   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:38.496867   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:38.497376   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:38.497534   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:40.848094   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:40.848094   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:40.849108   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:07:40.851073   12544 start.go:128] duration metric: took 1m56.3643371s to createHost
	I0203 11:07:40.851179   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:42.798664   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:42.798664   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:42.798752   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:45.107269   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:45.107269   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:45.111945   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:45.112319   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:45.112319   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:07:45.249328   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738580865.264456433
	
	I0203 11:07:45.249328   12544 fix.go:216] guest clock: 1738580865.264456433
	I0203 11:07:45.249405   12544 fix.go:229] Guest: 2025-02-03 11:07:45.264456433 +0000 UTC Remote: 2025-02-03 11:07:40.8510736 +0000 UTC m=+304.312267301 (delta=4.413382833s)
	I0203 11:07:45.249477   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:49.579785   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:49.579785   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:49.586783   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:49.587236   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:49.587309   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738580865
	I0203 11:07:49.730869   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:07:45 UTC 2025
	
	I0203 11:07:49.730922   12544 fix.go:236] clock set: Mon Feb  3 11:07:45 UTC 2025
	 (err=<nil>)
	I0203 11:07:49.730922   12544 start.go:83] releasing machines lock for "ha-429000-m02", held for 2m5.2440843s
	I0203 11:07:49.731097   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:54.024421   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:54.024421   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:54.027485   12544 out.go:177] * Found network options:
	I0203 11:07:54.030787   12544 out.go:177]   - NO_PROXY=172.25.12.47
	W0203 11:07:54.033044   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:07:54.036070   12544 out.go:177]   - NO_PROXY=172.25.12.47
	W0203 11:07:54.038068   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:07:54.040119   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:07:54.042352   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:07:54.042500   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:54.050168   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 11:07:54.050168   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:56.053155   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:56.053258   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:58.447051   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:58.447104   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:58.447104   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:58.465429   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:58.465429   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:58.465429   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:58.557101   12544 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5068061s)
	W0203 11:07:58.557184   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:07:58.565319   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:07:58.567403   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5249998s)
	W0203 11:07:58.567403   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 11:07:58.593658   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:07:58.593658   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:07:58.593963   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:07:58.642037   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0203 11:07:58.667727   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:07:58.667727   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 11:07:58.668727   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:07:58.693518   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:07:58.701095   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:07:58.728210   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:07:58.756282   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:07:58.783590   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:07:58.811673   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:07:58.838749   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:07:58.866041   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:07:58.893910   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 11:07:58.921814   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:07:58.938925   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:07:58.947564   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:07:58.978806   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:07:59.003249   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:07:59.195975   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:07:59.227982   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:07:59.237133   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:07:59.266974   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:07:59.301437   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:07:59.334862   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:07:59.368201   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:07:59.400062   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:07:59.462702   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:07:59.491052   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:07:59.534019   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:07:59.551818   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:07:59.570195   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:07:59.612847   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:07:59.797009   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:07:59.970334   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:07:59.970334   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:08:00.011940   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:00.207325   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:08:02.797686   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5903313s)
	I0203 11:08:02.806053   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:08:02.837050   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:08:02.867054   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:08:03.059012   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:08:03.247004   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:03.444558   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:08:03.483681   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:08:03.515443   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:03.709902   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:08:03.816150   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:08:03.823468   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:08:03.832753   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:08:03.841213   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:08:03.854128   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:08:03.903145   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:08:03.910121   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:08:03.952117   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:08:03.990898   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:08:03.993580   12544 out.go:177]   - env NO_PROXY=172.25.12.47
	I0203 11:08:03.997644   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:08:04.001754   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:08:04.004033   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:08:04.004033   12544 ip.go:214] interface addr: 172.25.0.1/20
	I0203 11:08:04.011029   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:08:04.017054   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:08:04.039130   12544 mustload.go:65] Loading cluster: ha-429000
	I0203 11:08:04.039414   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:08:04.040065   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:06.028951   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:06.029112   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:06.029112   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:08:06.029833   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.13.142
	I0203 11:08:06.029833   12544 certs.go:194] generating shared ca certs ...
	I0203 11:08:06.029833   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.030349   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:08:06.030610   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:08:06.030610   12544 certs.go:256] generating profile certs ...
	I0203 11:08:06.031234   12544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:08:06.031347   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa
	I0203 11:08:06.031505   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.13.142 172.25.15.254]
	I0203 11:08:06.211013   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa ...
	I0203 11:08:06.211013   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa: {Name:mk49e737d3682f472190d3b64ef4f7e34ffb5ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.212020   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa ...
	I0203 11:08:06.212020   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa: {Name:mkf4bcf3e40665551dd559d734fad4d6a11f8ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.213021   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:08:06.229255   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
	I0203 11:08:06.230200   12544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:08:06.231447   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:08:06.231567   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:08:06.231751   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:08:06.231751   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:08:06.231751   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:08:06.232941   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:06.232992   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:08.173759   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:08.173759   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:08.173851   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:08:10.506259   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:08:10.506259   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:10.509933   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:08:10.615797   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0203 11:08:10.624564   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0203 11:08:10.651460   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0203 11:08:10.657788   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0203 11:08:10.687374   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0203 11:08:10.694707   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0203 11:08:10.727906   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0203 11:08:10.734514   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0203 11:08:10.768974   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0203 11:08:10.775832   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0203 11:08:10.804704   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0203 11:08:10.811887   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0203 11:08:10.831437   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:08:10.878274   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:08:10.926494   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:08:10.977012   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:08:11.020231   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0203 11:08:11.065107   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:08:11.111806   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:08:11.156926   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:08:11.202611   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:08:11.247297   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:08:11.290015   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:08:11.332513   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0203 11:08:11.361800   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0203 11:08:11.391668   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0203 11:08:11.420466   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0203 11:08:11.450675   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0203 11:08:11.479542   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0203 11:08:11.509765   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0203 11:08:11.549139   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:08:11.566518   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:08:11.592577   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.599741   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.608854   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.625840   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 11:08:11.654495   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:08:11.682524   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.689386   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.698542   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.715902   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:08:11.743475   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:08:11.772598   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.779579   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.787380   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.807036   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:08:11.843230   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:08:11.852951   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:08:11.853079   12544 kubeadm.go:934] updating node {m02 172.25.13.142 8443 v1.32.1 docker true true} ...
	I0203 11:08:11.853079   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.13.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:08:11.853079   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:08:11.860986   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:08:11.891732   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:08:11.891732   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0203 11:08:11.900561   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:08:11.919500   12544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0203 11:08:11.927755   12544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0203 11:08:11.948610   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
	I0203 11:08:11.948680   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0203 11:08:11.948680   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
	I0203 11:08:13.017768   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:08:13.027840   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:08:13.033886   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 11:08:13.033886   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0203 11:08:13.091915   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:08:13.099901   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:08:13.171977   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 11:08:13.172155   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0203 11:08:13.636731   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:08:13.692751   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:08:13.700750   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:08:13.722865   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 11:08:13.723025   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0203 11:08:14.228469   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0203 11:08:14.247548   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0203 11:08:14.280195   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:08:14.310962   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0203 11:08:14.349516   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:08:14.356059   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:08:14.386740   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:14.580038   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:08:14.614328   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:08:14.615277   12544 start.go:317] joinCluster: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:08:14.615511   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 11:08:14.615626   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:16.573309   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:16.573390   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:16.573476   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:08:18.925157   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:08:18.925157   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:18.925855   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:08:19.317985   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7023148s)
	I0203 11:08:19.317985   12544 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:08:19.317985   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xi17re.n6bazw697qvc86yk --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m02 --control-plane --apiserver-advertise-address=172.25.13.142 --apiserver-bind-port=8443"
	I0203 11:08:59.562181   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xi17re.n6bazw697qvc86yk --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m02 --control-plane --apiserver-advertise-address=172.25.13.142 --apiserver-bind-port=8443": (40.2437373s)
	I0203 11:08:59.562181   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 11:09:00.303269   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000-m02 minikube.k8s.io/updated_at=2025_02_03T11_09_00_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=false
	I0203 11:09:00.463893   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-429000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0203 11:09:00.612956   12544 start.go:319] duration metric: took 45.9971639s to joinCluster
	I0203 11:09:00.613149   12544 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:09:00.613672   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:09:00.615311   12544 out.go:177] * Verifying Kubernetes components...
	I0203 11:09:00.626684   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:09:00.951656   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:09:00.976442   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:09:00.976759   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0203 11:09:00.976759   12544 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.15.254:8443 with https://172.25.12.47:8443
	I0203 11:09:00.977330   12544 node_ready.go:35] waiting up to 6m0s for node "ha-429000-m02" to be "Ready" ...
	I0203 11:09:00.977942   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:00.977942   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:00.978001   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:00.978001   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.012197   12544 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0203 11:09:01.477609   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:01.477609   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:01.477609   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.477609   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:01.484490   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:01.977451   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:01.977451   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:01.977451   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:01.977451   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.982874   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.478303   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:02.478303   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:02.478303   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:02.478303   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:02.483307   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.978527   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:02.978527   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:02.978527   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:02.978527   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:02.983875   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.985352   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:03.477810   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:03.477810   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:03.477810   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:03.477810   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:03.482919   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:03.977582   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:03.977582   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:03.977582   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:03.977582   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:03.982585   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:04.479202   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:04.479202   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:04.479202   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:04.479202   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:04.585675   12544 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0203 11:09:04.978034   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:04.978034   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:04.978034   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:04.978034   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:04.982038   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:05.478067   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:05.478067   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:05.478067   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:05.478067   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:05.483520   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:05.484071   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:05.978365   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:05.978365   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:05.978365   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:05.978365   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:05.983356   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:06.478615   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:06.478615   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:06.478615   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:06.478615   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:06.777587   12544 round_trippers.go:574] Response Status: 200 OK in 298 milliseconds
	I0203 11:09:06.978982   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:06.978982   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:06.978982   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:06.978982   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:06.985357   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:07.477596   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:07.477596   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:07.477596   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:07.477596   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:07.500007   12544 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0203 11:09:07.501094   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:07.977556   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:07.977556   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:07.977556   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:07.977556   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:07.983217   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:08.477799   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:08.477799   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:08.477799   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:08.477799   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:08.484171   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:08.977721   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:08.978145   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:08.978145   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:08.978145   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:08.984619   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:09.477927   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:09.477927   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:09.477927   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:09.477927   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:09.484987   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:09.977537   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:09.977537   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:09.977537   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:09.977537   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:09.983291   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:09.984139   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:10.478519   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:10.478519   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:10.478519   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:10.478519   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:10.484455   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:10.978790   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:10.978790   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:10.978790   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:10.978790   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:10.984182   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:11.478120   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:11.478120   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:11.478120   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:11.478120   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:11.482246   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:11.978503   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:11.978571   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:11.978571   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:11.978571   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:11.983849   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:11.985708   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:12.478333   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:12.478333   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:12.478333   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:12.478333   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:12.483669   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:12.978840   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:12.978840   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:12.978840   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:12.978840   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:12.983431   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:13.478161   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:13.478280   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:13.478280   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:13.478280   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:13.481913   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:13.978178   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:13.978178   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:13.978178   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:13.978178   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:13.984477   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:14.477979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:14.477979   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:14.477979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:14.477979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:14.482714   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:14.483772   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:14.978423   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:14.978423   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:14.978423   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:14.978423   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:14.988814   12544 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 11:09:15.478074   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:15.478074   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:15.478074   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:15.478074   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:15.484085   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:15.978467   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:15.978467   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:15.978467   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:15.978467   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:15.984069   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:16.478028   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:16.478028   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:16.478028   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:16.478028   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:16.483811   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:16.484586   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:16.978002   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:16.978474   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:16.978545   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:16.978545   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:16.983820   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:17.478312   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:17.478312   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:17.478312   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:17.478312   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:17.483577   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:17.977996   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:17.977996   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:17.977996   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:17.977996   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:17.982906   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:18.478378   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:18.478378   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.478588   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.478588   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.484225   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:18.485182   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:18.978535   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:18.978535   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.978535   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.978535   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.984135   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:18.984874   12544 node_ready.go:49] node "ha-429000-m02" has status "Ready":"True"
	I0203 11:09:18.984874   12544 node_ready.go:38] duration metric: took 18.0073382s for node "ha-429000-m02" to be "Ready" ...
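
The readiness loop above polls GET /api/v1/nodes/ha-429000-m02 roughly every half second until the node reports the Ready condition. As a point of reference only, a minimal client-go sketch of the same check follows; the function name, the 500ms interval, and the kubeconfig path handling are illustrative assumptions, not minikube's actual node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports the
// Ready condition as True, or the timeout expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible in the log
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	// Kubeconfig path taken from the log above; illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-429000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
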
	I0203 11:09:18.984951   12544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:09:18.985085   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:18.985085   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.985151   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.985151   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.994876   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:19.003718   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.003718   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5jzvf
	I0203 11:09:19.003718   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.003718   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.003718   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.008673   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.010275   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.010275   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.010275   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.010275   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.017296   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:19.017296   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.017296   12544 pod_ready.go:82] duration metric: took 13.5772ms for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.017296   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.018045   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-r5pf5
	I0203 11:09:19.018105   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.018105   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.018105   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.021834   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.022979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.023034   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.023034   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.023034   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.026175   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.027185   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.027185   12544 pod_ready.go:82] duration metric: took 9.8892ms for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.027185   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.027264   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000
	I0203 11:09:19.027264   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.027264   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.027264   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.031133   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.031736   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.031736   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.031736   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.031806   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.035368   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.036120   12544 pod_ready.go:93] pod "etcd-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.036305   12544 pod_ready.go:82] duration metric: took 9.1195ms for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.036344   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.036533   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m02
	I0203 11:09:19.036878   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.036878   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.036878   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.046022   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:19.046920   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.046950   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.046950   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.046990   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.050672   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.050672   12544 pod_ready.go:93] pod "etcd-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.050672   12544 pod_ready.go:82] duration metric: took 14.3279ms for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.050672   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.180221   12544 request.go:632] Waited for 129.5475ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:09:19.180448   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:09:19.180448   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.180448   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.180448   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.185731   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.379390   12544 request.go:632] Waited for 193.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.379792   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.379792   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.379792   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.379792   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.385025   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.385627   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.385627   12544 pod_ready.go:82] duration metric: took 334.9511ms for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.385719   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.579161   12544 request.go:632] Waited for 193.4399ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:09:19.579161   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:09:19.579161   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.579161   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.579161   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.584594   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:19.779333   12544 request.go:632] Waited for 193.733ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.779333   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.779333   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.779333   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.779333   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.786277   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:19.787082   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.787156   12544 pod_ready.go:82] duration metric: took 401.4324ms for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.787156   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.979168   12544 request.go:632] Waited for 191.9305ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:09:19.979168   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:09:19.979168   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.979168   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.979696   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.990469   12544 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 11:09:20.179276   12544 request.go:632] Waited for 187.9792ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:20.179569   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:20.179569   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.179569   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.179569   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.183876   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:20.185014   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.185014   12544 pod_ready.go:82] duration metric: took 397.8537ms for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.185014   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.379406   12544 request.go:632] Waited for 194.2749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:09:20.379406   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:09:20.379815   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.379815   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.379815   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.384424   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:20.578586   12544 request.go:632] Waited for 192.8918ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.578892   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.578892   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.578892   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.578892   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.588053   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:20.589019   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.589086   12544 pod_ready.go:82] duration metric: took 404.0669ms for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.589086   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.779020   12544 request.go:632] Waited for 189.8645ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:09:20.779327   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:09:20.779376   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.779376   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.779376   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.785463   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:20.979641   12544 request.go:632] Waited for 192.9365ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.980068   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.980120   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.980157   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.980157   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.987428   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:20.987428   12544 pod_ready.go:93] pod "kube-proxy-2n5cz" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.987428   12544 pod_ready.go:82] duration metric: took 398.3373ms for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.987428   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.180890   12544 request.go:632] Waited for 193.4599ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:09:21.180890   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:09:21.180890   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.180890   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.180890   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.185018   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:21.378798   12544 request.go:632] Waited for 191.8209ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.378798   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.378798   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.378798   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.378798   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.383967   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.384593   12544 pod_ready.go:93] pod "kube-proxy-dhm6z" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:21.384658   12544 pod_ready.go:82] duration metric: took 397.2254ms for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.384658   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.578995   12544 request.go:632] Waited for 194.2189ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:09:21.578995   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:09:21.578995   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.578995   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.578995   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.584770   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.779527   12544 request.go:632] Waited for 194.0532ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.779527   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.779527   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.779527   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.779527   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.784744   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.785422   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:21.785422   12544 pod_ready.go:82] duration metric: took 400.7595ms for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.785486   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.979297   12544 request.go:632] Waited for 193.7445ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:09:21.979297   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:09:21.979632   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.979632   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.979632   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.988347   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:09:22.179047   12544 request.go:632] Waited for 189.9705ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:22.179047   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:22.179047   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.179047   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.179047   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.184463   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:22.185447   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:22.185447   12544 pod_ready.go:82] duration metric: took 399.9566ms for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:22.185447   12544 pod_ready.go:39] duration metric: took 3.2004594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
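
The repeated request.go:632 "Waited for ... due to client-side throttling" lines above come from client-go's client-side rate limiter; the rest.Config dump earlier in the log shows QPS:0 and Burst:0, which means the defaults of 5 requests/second with a burst of 10 apply. A hedged sketch of how a caller could raise those limits follows; the helper name and the 50/100 values are illustrative and not what minikube configures.

package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with raised client-side rate limits so
// bursts of Get/List calls are queued less by the default limiter.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go defaults to 5 requests/second when this is 0
	cfg.Burst = 100 // and to a burst of 10 when this is 0
	return kubernetes.NewForConfig(cfg)
}
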
	I0203 11:09:22.185585   12544 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:09:22.193496   12544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:09:22.220750   12544 api_server.go:72] duration metric: took 21.6072861s to wait for apiserver process to appear ...
	I0203 11:09:22.220905   12544 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:09:22.220905   12544 api_server.go:253] Checking apiserver healthz at https://172.25.12.47:8443/healthz ...
	I0203 11:09:22.234051   12544 api_server.go:279] https://172.25.12.47:8443/healthz returned 200:
	ok
	I0203 11:09:22.234146   12544 round_trippers.go:463] GET https://172.25.12.47:8443/version
	I0203 11:09:22.234146   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.234146   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.234146   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.235747   12544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 11:09:22.236106   12544 api_server.go:141] control plane version: v1.32.1
	I0203 11:09:22.236136   12544 api_server.go:131] duration metric: took 15.2313ms to wait for apiserver health ...
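
The api_server.go health probe above is a plain HTTPS GET against /healthz that expects a 200 response with body "ok". A minimal Go equivalent follows; the client certificate paths are the ones shown in the earlier rest.Config dump, and skipping CA verification here is purely to keep the illustration short (the real check trusts .minikube\ca.crt).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Client cert/key as listed in the ha-429000 profile (illustration only).
	cert, err := tls.LoadX509KeyPair(
		`C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt`,
		`C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key`,
	)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates:       []tls.Certificate{cert},
		InsecureSkipVerify: true, // demo shortcut; not how the real probe verifies the server
	}}}
	resp, err := client.Get("https://172.25.12.47:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
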
	I0203 11:09:22.236136   12544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:09:22.378779   12544 request.go:632] Waited for 142.587ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.379206   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.379296   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.379296   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.379296   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.386652   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:22.392471   12544 system_pods.go:59] 17 kube-system pods found
	I0203 11:09:22.393006   12544 system_pods.go:61] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:09:22.393006   12544 system_pods.go:74] duration metric: took 156.814ms to wait for pod list to return data ...
	I0203 11:09:22.393006   12544 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:09:22.579267   12544 request.go:632] Waited for 186.1162ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:09:22.579267   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:09:22.579267   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.579267   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.579267   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.585682   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:22.585961   12544 default_sa.go:45] found service account: "default"
	I0203 11:09:22.585961   12544 default_sa.go:55] duration metric: took 192.9529ms for default service account to be created ...
	I0203 11:09:22.585961   12544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:09:22.779593   12544 request.go:632] Waited for 193.5286ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.779593   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.779593   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.779593   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.779593   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.787485   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:22.793721   12544 system_pods.go:86] 17 kube-system pods found
	I0203 11:09:22.793721   12544 system_pods.go:89] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:09:22.793721   12544 system_pods.go:126] duration metric: took 207.7582ms to wait for k8s-apps to be running ...
	I0203 11:09:22.793721   12544 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:09:22.801544   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:09:22.829400   12544 system_svc.go:56] duration metric: took 35.6783ms WaitForService to wait for kubelet
	I0203 11:09:22.829400   12544 kubeadm.go:582] duration metric: took 22.2159289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:09:22.829400   12544 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:09:22.979912   12544 request.go:632] Waited for 150.51ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes
	I0203 11:09:22.980126   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes
	I0203 11:09:22.980126   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.980126   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.980126   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.991325   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:09:22.992679   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:09:22.992795   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:09:22.992866   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:09:22.992866   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:09:22.992899   12544 node_conditions.go:105] duration metric: took 163.4965ms to run NodePressure ...
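
The NodePressure step reads each node's capacity (17734596Ki of ephemeral storage and 2 CPUs per node here) from the /api/v1/nodes list. A short client-go sketch of pulling the same figures follows; the function name is hypothetical and the clientset is assumed to be built as in the earlier readiness sketch.

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the two capacity figures the
// NodePressure check logs above: ephemeral storage and CPU count.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
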
	I0203 11:09:22.992899   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:09:22.992957   12544 start.go:255] writing updated cluster config ...
	I0203 11:09:22.996848   12544 out.go:201] 
	I0203 11:09:23.016629   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:09:23.016864   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:09:23.023493   12544 out.go:177] * Starting "ha-429000-m03" control-plane node in "ha-429000" cluster
	I0203 11:09:23.025476   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:09:23.025476   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:09:23.025476   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:09:23.026470   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:09:23.026470   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:09:23.036680   12544 start.go:360] acquireMachinesLock for ha-429000-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:09:23.036680   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000-m03"
	I0203 11:09:23.037495   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:09:23.037526   12544 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0203 11:09:23.041814   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:09:23.042595   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:09:23.042595   12544 client.go:168] LocalClient.Create starting
	I0203 11:09:23.042781   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:09:23.043221   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:09:23.043221   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:09:23.043423   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:09:23.043584   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:09:23.043584   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:09:23.043584   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:09:26.465219   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:09:26.465478   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:26.465556   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:09:27.856261   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:09:27.856849   12544 main.go:141] libmachine: [stderr =====>] : 
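For reference, the two role checks just executed can be reproduced by hand. A minimal PowerShell sketch using the same expressions the log shows, querying membership in the Hyper-V Administrators group (well-known SID S-1-5-32-578) and in the built-in Administrators role:

    # Wrap the current Windows identity so role membership can be queried
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    # Hyper-V Administrators group, well-known SID S-1-5-32-578 (False in this run)
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))
    # Built-in Administrators role (True in this run, so provisioning proceeds)
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)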
	I0203 11:09:27.856947   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:09:31.267585   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:09:31.267585   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:31.269513   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:09:31.644314   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:09:31.905532   12544 main.go:141] libmachine: Creating VM...
	I0203 11:09:31.905532   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:09:34.614001   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:09:34.614083   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:34.614158   12544 main.go:141] libmachine: Using switch "Default Switch"
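The switch-selection query above can also be run interactively. This sketch reuses the exact filter from the log, keeping any External switch plus the well-known "Default Switch" GUID, which is the one chosen here:

    # Candidate switches: any External switch, or the Hyper-V "Default Switch"
    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json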
	I0203 11:09:34.614246   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:09:36.267265   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:09:36.267265   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:36.267573   12544 main.go:141] libmachine: Creating VHD
	I0203 11:09:36.267573   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:09:39.930156   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CF8BF7F8-7682-4EB8-9A66-97DF1B7993F6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:09:39.931037   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:39.931037   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:09:39.931037   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:09:39.943513   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:09:43.006544   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:43.006803   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:43.006803   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd' -SizeBytes 20000MB
	I0203 11:09:45.406729   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:45.406729   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:45.406820   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:09:48.801462   12544 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-429000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:09:48.801699   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:48.801803   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000-m03 -DynamicMemoryEnabled $false
	I0203 11:09:50.858092   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:50.858092   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:50.858899   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000-m03 -Count 2
	I0203 11:09:52.891961   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:52.892957   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:52.892957   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\boot2docker.iso'
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd'
	I0203 11:09:57.683436   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:57.683513   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:57.683513   12544 main.go:141] libmachine: Starting VM...
	I0203 11:09:57.683513   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m03
	I0203 11:10:00.518485   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:00.518485   12544 main.go:141] libmachine: [stderr =====>] : 
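Taken together, the disk and VM creation steps logged above (one PowerShell invocation each) amount to the sequence below; a condensed sketch reusing the same cmdlets, names, and sizes from this run, not an alternative procedure:

    $machineDir = 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03'
    # Seed a tiny fixed VHD, convert it to a dynamic disk, then grow it to the requested 20000MB
    Hyper-V\New-VHD     -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$machineDir\disk.vhd" -SizeBytes 20000MB
    # Create the VM on the Default Switch, pin memory and CPUs, attach ISO and disk, then boot
    Hyper-V\New-VM ha-429000-m03 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-429000-m03 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-429000-m03 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-429000-m03 -Path "$machineDir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-429000-m03 -Path "$machineDir\disk.vhd"
    Hyper-V\Start-VM ha-429000-m03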
	I0203 11:10:00.518485   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:10:00.518854   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:04.936907   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:04.936907   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:05.937751   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:07.926762   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:07.927453   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:07.927453   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:10.215940   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:10.216737   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:11.217481   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:13.208664   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:13.209681   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:13.209788   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:15.571509   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:15.572234   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:16.573134   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:18.622142   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:18.622142   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:18.622498   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:20.956476   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:20.957345   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:21.958260   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:24.029618   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:24.029618   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:24.030475   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:26.460332   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:26.460839   12544 main.go:141] libmachine: [stderr =====>] : 
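The repeated state and address queries above are the "Waiting for host to start..." loop: minikube polls until the VM reports Running and its first network adapter has picked up an IPv4 address on the Default Switch (172.25.0.10 here). A rough PowerShell equivalent of that loop:

    # Poll until the VM is Running and its first adapter has obtained an IPv4 address
    $vmName = 'ha-429000-m03'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $vmName).State
        $ip    = ((Hyper-V\Get-VM $vmName).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "VM is $state at $ip"   # 172.25.0.10 in this run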
	I0203 11:10:26.460839   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:28.448820   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:28.449039   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:28.449039   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:10:28.449039   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:30.509179   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:30.509179   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:30.509628   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:32.922209   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:32.922209   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:32.926084   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:32.942328   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:32.942485   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:10:33.080805   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:10:33.080805   12544 buildroot.go:166] provisioning hostname "ha-429000-m03"
	I0203 11:10:33.080805   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:35.060475   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:35.060475   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:35.060846   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:37.496864   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:37.496943   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:37.501226   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:37.501851   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:37.501851   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000-m03 && echo "ha-429000-m03" | sudo tee /etc/hostname
	I0203 11:10:37.663829   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m03
	
	I0203 11:10:37.663829   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:39.663803   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:39.663893   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:39.663965   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:42.059871   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:42.060507   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:42.066805   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:42.066805   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:42.066805   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:10:42.208469   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:10:42.208469   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:10:42.208469   12544 buildroot.go:174] setting up certificates
	I0203 11:10:42.208469   12544 provision.go:84] configureAuth start
	I0203 11:10:42.208469   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:44.162647   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:44.163302   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:44.163392   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:48.548493   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:48.548493   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:48.548567   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:50.937084   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:50.937084   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:50.937392   12544 provision.go:143] copyHostCerts
	I0203 11:10:50.937392   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:10:50.937392   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:10:50.937392   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:10:50.938082   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:10:50.938799   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:10:50.938799   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:10:50.938799   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:10:50.939474   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:10:50.940079   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:10:50.940079   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:10:50.940079   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:10:50.940786   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:10:50.941382   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m03 san=[127.0.0.1 172.25.0.10 ha-429000-m03 localhost minikube]
	I0203 11:10:51.165975   12544 provision.go:177] copyRemoteCerts
	I0203 11:10:51.175173   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:10:51.175173   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:53.173291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:53.173447   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:53.173501   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:55.570280   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:55.570719   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:55.571021   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:10:55.671119   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4958955s)
	I0203 11:10:55.671119   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:10:55.671504   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:10:55.718915   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:10:55.719276   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:10:55.764976   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:10:55.764976   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:10:55.811912   12544 provision.go:87] duration metric: took 13.6032877s to configureAuth
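With configureAuth done, the node is reachable with the per-machine key reported earlier (IP 172.25.0.10, user docker). A quick manual check, assuming the Windows OpenSSH client (ssh.exe) is available on the host; this is not something the test run itself does:

    # Connect with the generated machine key and confirm the hostname provisioned above
    $key = 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa'
    ssh.exe -i $key -o StrictHostKeyChecking=no docker@172.25.0.10 hostname
    # Expected output: ha-429000-m03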
	I0203 11:10:55.811975   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:10:55.812552   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:10:55.812629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:00.151002   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:00.151053   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:00.154873   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:00.155135   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:00.155135   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:11:00.292396   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:11:00.292496   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:11:00.292648   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:11:00.292730   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:02.280441   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:02.280566   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:02.280655   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:04.628377   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:04.628377   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:04.632783   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:04.633307   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:04.633396   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.47"
	Environment="NO_PROXY=172.25.12.47,172.25.13.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:11:04.800327   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.47
	Environment=NO_PROXY=172.25.12.47,172.25.13.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:11:04.800440   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:06.825890   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:06.825890   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:06.826760   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:09.224618   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:09.224618   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:09.228837   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:09.228837   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:09.228837   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:11:11.430762   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:11:11.430762   12544 machine.go:96] duration metric: took 42.9812331s to provisionDockerMachine
	I0203 11:11:11.430762   12544 client.go:171] duration metric: took 1m48.3869323s to LocalClient.Create
	I0203 11:11:11.430762   12544 start.go:167] duration metric: took 1m48.3869323s to libmachine.API.Create "ha-429000"
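The docker.service unit written above starts dockerd listening on tcp://0.0.0.0:2376 with TLS (see the ExecStart line). Reachability of that port from the host can be spot-checked with Test-NetConnection; an illustrative check, not part of the test itself:

    # Confirm the TLS-protected Docker API port on the new node accepts connections
    Test-NetConnection -ComputerName 172.25.0.10 -Port 2376 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded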
	I0203 11:11:11.430762   12544 start.go:293] postStartSetup for "ha-429000-m03" (driver="hyperv")
	I0203 11:11:11.431296   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:11:11.439364   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:11:11.439364   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:13.411191   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:13.411290   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:13.411290   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:15.793517   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:15.793517   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:15.793517   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:15.896868   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4573592s)
	I0203 11:11:15.904801   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:11:15.912528   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:11:15.912617   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:11:15.912645   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:11:15.913720   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:11:15.913720   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:11:15.922180   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:11:15.940792   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:11:15.993040   12544 start.go:296] duration metric: took 4.5615949s for postStartSetup
	I0203 11:11:15.995230   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:17.980452   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:17.980452   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:17.980631   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:20.378636   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:20.378790   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:20.379023   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:11:20.380993   12544 start.go:128] duration metric: took 1m57.342129s to createHost
	I0203 11:11:20.381126   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:24.725597   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:24.725597   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:24.733207   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:24.733728   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:24.733728   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:11:24.864680   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738581084.879791398
	
	I0203 11:11:24.864680   12544 fix.go:216] guest clock: 1738581084.879791398
	I0203 11:11:24.864680   12544 fix.go:229] Guest: 2025-02-03 11:11:24.879791398 +0000 UTC Remote: 2025-02-03 11:11:20.3810596 +0000 UTC m=+523.839750701 (delta=4.498731798s)
	I0203 11:11:24.865278   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:26.903708   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:26.904409   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:26.904462   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:29.265049   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:29.265049   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:29.269719   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:29.270272   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:29.270349   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738581084
	I0203 11:11:29.408982   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:11:24 UTC 2025
	
	I0203 11:11:29.408982   12544 fix.go:236] clock set: Mon Feb  3 11:11:24 UTC 2025
	 (err=<nil>)
	I0203 11:11:29.409047   12544 start.go:83] releasing machines lock for "ha-429000-m03", held for 2m6.3709259s
	I0203 11:11:29.409047   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:33.771458   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:33.771553   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:33.774457   12544 out.go:177] * Found network options:
	I0203 11:11:33.776912   12544 out.go:177]   - NO_PROXY=172.25.12.47,172.25.13.142
	W0203 11:11:33.778688   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.778688   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:11:33.781199   12544 out.go:177]   - NO_PROXY=172.25.12.47,172.25.13.142
	W0203 11:11:33.784975   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.784975   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.785960   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.785960   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:11:33.788320   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:11:33.788320   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:33.795079   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 11:11:33.795079   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:35.847884   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:35.848155   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:35.848318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:35.848801   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:35.848801   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:35.849051   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:38.333262   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:38.333262   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:38.333262   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:38.357258   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:38.357258   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:38.357258   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:38.427442   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6381021s)
	W0203 11:11:38.427442   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 11:11:38.446040   12544 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6508082s)
	W0203 11:11:38.446129   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:11:38.456426   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:11:38.485488   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:11:38.485488   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:11:38.485488   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:11:38.528687   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0203 11:11:38.542781   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:11:38.542878   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 11:11:38.558633   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:11:38.582061   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:11:38.591932   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:11:38.620258   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:11:38.647297   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:11:38.675785   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:11:38.702790   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:11:38.731172   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:11:38.759644   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:11:38.788222   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 11:11:38.814231   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:11:38.832509   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:11:38.840496   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:11:38.868270   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:11:38.892429   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:39.085127   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:11:39.118803   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:11:39.126327   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:11:39.158845   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:11:39.187199   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:11:39.218421   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:11:39.250023   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:11:39.281732   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:11:39.342180   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:11:39.366267   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:11:39.406566   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:11:39.420442   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:11:39.436907   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:11:39.476269   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:11:39.668765   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:11:39.848458   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:11:39.849451   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:11:39.888969   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:40.082254   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:11:42.668594   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5863098s)
	I0203 11:11:42.678853   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:11:42.710528   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:11:42.741505   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:11:42.931042   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:11:43.121490   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:43.301125   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:11:43.341175   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:11:43.373242   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:43.566660   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:11:43.671244   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:11:43.680540   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:11:43.692621   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:11:43.700499   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:11:43.715098   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:11:43.767299   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:11:43.774365   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:11:43.814332   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:11:43.851343   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:11:43.853834   12544 out.go:177]   - env NO_PROXY=172.25.12.47
	I0203 11:11:43.857383   12544 out.go:177]   - env NO_PROXY=172.25.12.47,172.25.13.142
	I0203 11:11:43.859514   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:11:43.866837   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:11:43.866837   12544 ip.go:214] interface addr: 172.25.0.1/20
	I0203 11:11:43.873400   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:11:43.880316   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
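The ip.go lines above locate the host-side address of the "vEthernet (Default Switch)" interface (172.25.0.1/20) and write it into the guest's /etc/hosts as host.minikube.internal. The same address can be read directly on the host, for example:

    # Host-side IPv4 of the Hyper-V Default Switch interface (172.25.0.1 in this run)
    Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4 |
        Select-Object InterfaceAlias, IPAddress, PrefixLength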
	I0203 11:11:43.905418   12544 mustload.go:65] Loading cluster: ha-429000
	I0203 11:11:43.905925   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:11:43.906482   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:45.875968   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:45.876041   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:45.876041   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:11:45.876532   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.0.10
	I0203 11:11:45.876532   12544 certs.go:194] generating shared ca certs ...
	I0203 11:11:45.876532   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.877281   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:11:45.877440   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:11:45.877440   12544 certs.go:256] generating profile certs ...
	I0203 11:11:45.878213   12544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:11:45.878213   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a
	I0203 11:11:45.878213   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.13.142 172.25.0.10 172.25.15.254]
	I0203 11:11:45.988705   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a ...
	I0203 11:11:45.988705   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a: {Name:mk1be027ea55560d27ff8cb8e301fd81e5b5b837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.989687   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a ...
	I0203 11:11:45.989687   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a: {Name:mk37b37b896fde1ac629a06ce6b4f6563adaa9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.990196   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:11:46.007610   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
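The apiserver certificate generated above carries the control-plane node IPs, the HA VIP 172.25.15.254, and the service IP 10.96.0.1 as SANs, per the IP list in the log. If an openssl binary happens to be installed on the host (an assumption; it is not part of this run), the SAN list on the copied apiserver.crt can be checked like this:

    # Show the Subject Alternative Name extension of the generated apiserver certificate
    $crt = 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt'
    openssl x509 -in $crt -noout -text | Select-String -Context 0,1 'Subject Alternative Name'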
	I0203 11:11:46.008617   12544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:11:46.009623   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:11:46.009623   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:11:46.009623   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:11:46.010610   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:11:46.011639   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:11:46.011836   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:11:46.011989   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:11:46.012066   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:46.012231   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:48.004707   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:48.004707   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:48.004783   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:50.356353   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:11:50.356353   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:50.357512   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:11:50.454373   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0203 11:11:50.462142   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0203 11:11:50.493099   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0203 11:11:50.499380   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0203 11:11:50.528039   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0203 11:11:50.535100   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0203 11:11:50.563056   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0203 11:11:50.569484   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0203 11:11:50.595729   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0203 11:11:50.603095   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0203 11:11:50.629965   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0203 11:11:50.637077   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0203 11:11:50.657885   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:11:50.706110   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:11:50.752374   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:11:50.798131   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:11:50.850022   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0203 11:11:50.894878   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:11:50.943862   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:11:50.990387   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:11:51.034907   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:11:51.079174   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:11:51.123129   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:11:51.170977   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0203 11:11:51.202841   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0203 11:11:51.235211   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0203 11:11:51.265247   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0203 11:11:51.295918   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0203 11:11:51.326702   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0203 11:11:51.357616   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0203 11:11:51.397694   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:11:51.414533   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:11:51.441640   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.449984   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.457631   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.473803   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 11:11:51.502188   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:11:51.528913   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.535388   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.542698   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.560653   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:11:51.587691   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:11:51.615327   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.621985   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.630578   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.647306   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
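The openssl/ln sequence above installs each CA under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how TLS libraries locate a trusted CA by subject. A condensed sketch of the same idea, assuming the openssl CLI is available locally (cert and link paths copied from the log, everything else illustrative):

// Sketch of the hashing step above: ask openssl for each cert's subject hash
// and link the cert into /etc/ssl/certs under "<hash>.0", mirroring the
// ssh_runner commands in the log. Minimal error handling by design.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certs := []string{"5452.pem", "54522.pem", "minikubeCA.pem"}
	for _, name := range certs {
		src := filepath.Join("/usr/share/ca-certificates", name)

		// openssl x509 -hash -noout -in <cert> prints the subject hash used
		// as the symlink name, exactly as in the log above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(src, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}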
	I0203 11:11:51.676810   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:11:51.686256   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:11:51.686256   12544 kubeadm.go:934] updating node {m03 172.25.0.10 8443 v1.32.1 docker true true} ...
	I0203 11:11:51.686256   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.0.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:11:51.686256   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:11:51.695053   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:11:51.723318   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:11:51.723318   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
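The vip_leasename/vip_leaseduration/vip_renewdeadline/vip_retryperiod values in the manifest above configure Lease-based leader election, so only one control-plane node advertises the 172.25.15.254 VIP at a time. A hedged sketch of what those numbers mean in client-go terms (this is not kube-vip's actual code; the kubeconfig path and identity are assumptions):

// Sketch only: Lease-based leader election with the same lease name,
// namespace and timings as the kube-vip manifest above.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"}, // vip_leasename / cp_namespace
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("this node now owns the VIP 172.25.15.254")
			},
			OnStoppedLeading: func() {
				log.Println("lost the lease; another control plane takes over the VIP")
			},
		},
	})
}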
	I0203 11:11:51.731211   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:11:51.754103   12544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0203 11:11:51.761708   12544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0203 11:11:51.780644   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0203 11:11:51.780708   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0203 11:11:51.780644   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0203 11:11:51.780790   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:11:51.780790   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:11:51.791364   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:11:51.791364   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:11:51.793347   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:11:51.810553   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:11:51.810553   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 11:11:51.810553   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 11:11:51.811551   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0203 11:11:51.811551   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0203 11:11:51.821243   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:11:51.888584   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 11:11:51.888777   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
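The "?checksum=file:<url>.sha256" query in the download URLs above means each binary is verified against the digest published next to it before it is copied into /var/lib/minikube/binaries. A minimal standalone sketch of that check for kubelet (illustrative only; the output path and error handling are simplified):

// Fetch a release binary and its published SHA-256, compare, then write the
// verified file. The URL is taken from the log; everything else is a sketch.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	// The .sha256 file carries the hex digest (possibly followed by a name).
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
	}
	if err := os.WriteFile("kubelet", bin, 0755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet verified and written")
}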
	I0203 11:11:52.959826   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0203 11:11:52.978273   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0203 11:11:53.010417   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:11:53.043140   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0203 11:11:53.084942   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:11:53.091003   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:11:53.121543   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:53.305437   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:11:53.336247   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:11:53.336850   12544 start.go:317] joinCluster: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:11:53.336850   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 11:11:53.336850   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:57.692163   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:11:57.692163   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:57.693333   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:11:57.886903   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.550001s)
	I0203 11:11:57.886989   12544 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:11:57.887141   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g2pn61.gbq976xywc4o46as --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m03 --control-plane --apiserver-advertise-address=172.25.0.10 --apiserver-bind-port=8443"
	I0203 11:12:39.040256   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g2pn61.gbq976xywc4o46as --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m03 --control-plane --apiserver-advertise-address=172.25.0.10 --apiserver-bind-port=8443": (41.1526455s)
	I0203 11:12:39.040256   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 11:12:39.875986   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000-m03 minikube.k8s.io/updated_at=2025_02_03T11_12_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=false
	I0203 11:12:40.066803   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-429000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0203 11:12:40.311564   12544 start.go:319] duration metric: took 46.9741786s to joinCluster
	I0203 11:12:40.311658   12544 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:12:40.312405   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:12:40.314860   12544 out.go:177] * Verifying Kubernetes components...
	I0203 11:12:40.326973   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:12:40.741212   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:12:40.801220   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:12:40.802217   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0203 11:12:40.802217   12544 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.15.254:8443 with https://172.25.12.47:8443
	I0203 11:12:40.802217   12544 node_ready.go:35] waiting up to 6m0s for node "ha-429000-m03" to be "Ready" ...
	I0203 11:12:40.803214   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:40.803214   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:40.803214   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:40.803214   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:40.818551   12544 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0203 11:12:41.304383   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:41.304383   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:41.304383   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:41.304383   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:41.309699   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:41.803415   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:41.803415   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:41.803415   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:41.803415   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:41.809123   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:42.303355   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:42.303355   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:42.303355   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:42.303355   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:42.308568   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:42.803569   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:42.803569   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:42.803569   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:42.803569   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:42.817614   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:12:42.818557   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:43.303540   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:43.303540   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:43.303540   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:43.303540   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:43.326433   12544 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0203 11:12:43.803930   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:43.803930   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:43.803930   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:43.803930   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:43.808716   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:44.303778   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:44.303778   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:44.303778   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:44.303778   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:44.309928   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:12:44.803763   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:44.803763   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:44.803763   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:44.803763   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:44.817986   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:12:44.818768   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:45.303563   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:45.304040   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:45.304090   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:45.304090   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:45.312045   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:45.804070   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:45.804070   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:45.804070   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:45.804070   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:45.809608   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:46.303794   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:46.303972   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:46.303972   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:46.303972   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:46.312372   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:12:46.803905   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:46.803905   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:46.803905   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:46.803905   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:46.809375   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:47.304447   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:47.304447   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:47.304447   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:47.304447   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:47.309172   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:47.309926   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:47.803564   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:47.803564   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:47.803564   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:47.803564   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:47.808932   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:48.304232   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:48.304232   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:48.304232   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:48.304232   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:48.309626   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:48.803998   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:48.803998   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:48.803998   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:48.803998   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:48.809236   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:49.303933   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:49.303933   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:49.303933   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:49.303933   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:49.309008   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:49.804097   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:49.804190   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:49.804190   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:49.804190   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:49.814160   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:12:49.814704   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:50.304472   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:50.304540   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:50.304540   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:50.304540   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:50.309793   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:50.803704   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:50.803704   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:50.803704   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:50.803704   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:50.808899   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:51.303673   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:51.303673   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:51.303673   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:51.303673   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:51.309059   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:51.803824   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:51.803824   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:51.803824   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:51.803824   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:51.808690   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:52.303348   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:52.303348   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:52.303348   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:52.303348   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:52.308777   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:52.309411   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:52.804217   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:52.804217   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:52.804217   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:52.804217   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:52.809727   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:53.304122   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:53.304122   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:53.304122   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:53.304122   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:53.307906   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:12:53.803839   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:53.803839   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:53.803839   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:53.803839   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:53.808607   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:54.304264   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:54.304264   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:54.304264   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:54.304264   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:54.310236   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:54.311000   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:54.803813   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:54.803813   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:54.803813   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:54.803813   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:54.815141   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:12:55.304120   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:55.304120   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:55.304120   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:55.304120   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:55.309119   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:55.803844   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:55.803844   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:55.803844   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:55.803844   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:55.809119   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:56.303992   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:56.303992   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:56.303992   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:56.303992   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:56.311805   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:56.312235   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:56.804145   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:56.804145   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:56.804145   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:56.804145   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:56.809681   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:57.303756   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:57.303756   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:57.303756   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:57.303756   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:57.312008   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:12:57.803822   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:57.803822   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:57.803822   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:57.803822   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:57.809325   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.304400   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:58.304400   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.304400   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.304400   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.310029   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.803498   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:58.803498   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.803498   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.803498   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.808587   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.809347   12544 node_ready.go:49] node "ha-429000-m03" has status "Ready":"True"
	I0203 11:12:58.809410   12544 node_ready.go:38] duration metric: took 18.0059907s for node "ha-429000-m03" to be "Ready" ...
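The 18s wait above is a poll loop: roughly twice a second the test asks the API server for the node object and checks whether its Ready condition reports True. A hedged client-go sketch of the same check (kubeconfig path, node name, interval and timeout either taken from the log or assumed), not minikube's actual node_ready.go:

// Poll the API server until the node's Ready condition is True or a timeout
// expires, mirroring the GET /api/v1/nodes/ha-429000-m03 loop above.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-429000-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			log.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	log.Fatal("timed out waiting for node to become Ready")
}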
	I0203 11:12:58.809410   12544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:12:58.809540   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:12:58.809685   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.809685   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.809685   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.841693   12544 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0203 11:12:58.850467   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.851042   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5jzvf
	I0203 11:12:58.851124   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.851124   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.851124   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.854120   12544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 11:12:58.855116   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.855116   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.855734   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.855734   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.859866   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.860457   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.860542   12544 pod_ready.go:82] duration metric: took 10.075ms for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.860542   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.860674   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-r5pf5
	I0203 11:12:58.860674   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.860674   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.860674   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.864603   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:12:58.865373   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.865373   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.865373   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.865373   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.869479   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.870270   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.870270   12544 pod_ready.go:82] duration metric: took 9.7283ms for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.870329   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.870417   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000
	I0203 11:12:58.870417   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.870417   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.870417   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.874876   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.875963   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.875963   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.875963   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.876021   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.882228   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:12:58.882791   12544 pod_ready.go:93] pod "etcd-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.882791   12544 pod_ready.go:82] duration metric: took 12.4615ms for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.882852   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.882920   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m02
	I0203 11:12:58.882920   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.882920   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.882920   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.894495   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:12:58.895072   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:12:58.895072   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.895072   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.895072   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.899816   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.900654   12544 pod_ready.go:93] pod "etcd-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.900654   12544 pod_ready.go:82] duration metric: took 17.8017ms for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.900654   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.004530   12544 request.go:632] Waited for 103.8753ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m03
	I0203 11:12:59.004530   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m03
	I0203 11:12:59.004530   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.004530   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.004530   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.008604   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:59.203869   12544 request.go:632] Waited for 194.6172ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:59.203869   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:59.203869   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.203869   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.203869   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.209854   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:59.210482   12544 pod_ready.go:93] pod "etcd-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:59.210482   12544 pod_ready.go:82] duration metric: took 309.8251ms for pod "etcd-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
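The "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limit (QPS 5, burst 10 when rest.Config leaves QPS and Burst at 0, as in the kapi.go dump earlier), not from server-side priority and fairness. A small sketch of how a client could raise those limits (the values below are arbitrary examples, not what minikube uses):

// Raise the client-side request rate limit so bursts of GETs are not delayed
// by the default 5 QPS / burst 10 limiter.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cfg.QPS = 50    // sustained requests per second before throttling kicks in
	cfg.Burst = 100 // short bursts allowed above the sustained rate

	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // subsequent requests avoid the 100-200ms client-side waits
}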
	I0203 11:12:59.210482   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.404119   12544 request.go:632] Waited for 193.6342ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:12:59.404119   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:12:59.404119   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.404119   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.404119   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.409342   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:59.604572   12544 request.go:632] Waited for 194.6357ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:59.604897   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:59.604968   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.605035   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.605053   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.612484   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:59.613084   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:59.613191   12544 pod_ready.go:82] duration metric: took 402.7035ms for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.613191   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.803932   12544 request.go:632] Waited for 190.7398ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:12:59.803932   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:12:59.803932   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.803932   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.803932   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.808812   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:00.004001   12544 request.go:632] Waited for 193.9864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:00.004001   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:00.004001   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.004001   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.004001   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.009450   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.009618   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.010144   12544 pod_ready.go:82] duration metric: took 396.9493ms for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.010144   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.203523   12544 request.go:632] Waited for 193.2804ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m03
	I0203 11:13:00.203523   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m03
	I0203 11:13:00.203523   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.203523   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.203523   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.208875   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.403799   12544 request.go:632] Waited for 193.0197ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:00.403799   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:00.403799   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.404287   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.404287   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.409493   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.410154   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.410218   12544 pod_ready.go:82] duration metric: took 400.0688ms for pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.410218   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.604493   12544 request.go:632] Waited for 194.2103ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:13:00.604753   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:13:00.604753   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.604753   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.604753   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.612186   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:00.804326   12544 request.go:632] Waited for 191.1461ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:00.804326   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:00.804326   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.804326   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.804326   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.809223   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:00.810204   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.810262   12544 pod_ready.go:82] duration metric: took 400.0394ms for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.810262   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.003830   12544 request.go:632] Waited for 193.4936ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:13:01.003830   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:13:01.003830   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.003830   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.003830   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.008814   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.203959   12544 request.go:632] Waited for 193.6638ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:01.204178   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:01.204178   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.204178   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.204178   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.208879   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.209614   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:01.209680   12544 pod_ready.go:82] duration metric: took 399.414ms for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.209680   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.404501   12544 request.go:632] Waited for 194.7278ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m03
	I0203 11:13:01.404758   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m03
	I0203 11:13:01.404758   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.404758   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.404758   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.412103   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:01.604712   12544 request.go:632] Waited for 191.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:01.604712   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:01.604712   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.604712   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.604712   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.608911   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.610302   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:01.610404   12544 pod_ready.go:82] duration metric: took 400.7189ms for pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.610404   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.804232   12544 request.go:632] Waited for 193.7228ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:13:01.804232   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:13:01.804232   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.804232   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.804232   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.809242   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.004365   12544 request.go:632] Waited for 194.1121ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:02.004365   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:02.004365   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.004365   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.004365   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.009884   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.010185   12544 pod_ready.go:93] pod "kube-proxy-2n5cz" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.010185   12544 pod_ready.go:82] duration metric: took 399.7771ms for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.010185   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.204520   12544 request.go:632] Waited for 194.3326ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:13:02.204520   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:13:02.204520   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.204520   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.204520   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.209337   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:02.404112   12544 request.go:632] Waited for 193.4593ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:02.404112   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:02.404112   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.404112   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.404112   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.408239   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:02.409842   12544 pod_ready.go:93] pod "kube-proxy-dhm6z" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.409842   12544 pod_ready.go:82] duration metric: took 399.6523ms for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.409842   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9nhx" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.604328   12544 request.go:632] Waited for 194.3267ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m9nhx
	I0203 11:13:02.604635   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m9nhx
	I0203 11:13:02.604635   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.604635   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.604635   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.610172   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.803866   12544 request.go:632] Waited for 192.7497ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:02.803866   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:02.803866   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.803866   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.803866   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.812788   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:02.812788   12544 pod_ready.go:93] pod "kube-proxy-m9nhx" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.812788   12544 pod_ready.go:82] duration metric: took 402.9409ms for pod "kube-proxy-m9nhx" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.812788   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.004672   12544 request.go:632] Waited for 191.8826ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:13:03.004672   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:13:03.004672   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.005042   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.005042   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.010888   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.204523   12544 request.go:632] Waited for 192.7176ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:03.204523   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:03.204523   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.204523   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.204523   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.209585   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.210850   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:03.210959   12544 pod_ready.go:82] duration metric: took 398.1665ms for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.210959   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.404602   12544 request.go:632] Waited for 193.5661ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:13:03.404602   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:13:03.404602   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.404602   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.404602   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.410175   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.604330   12544 request.go:632] Waited for 193.0786ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:03.604330   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:03.604330   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.604330   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.604330   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.618515   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:13:03.619010   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:03.619010   12544 pod_ready.go:82] duration metric: took 408.0464ms for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.619010   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.804767   12544 request.go:632] Waited for 185.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m03
	I0203 11:13:03.805089   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m03
	I0203 11:13:03.805175   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.805175   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.805203   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.811686   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.003800   12544 request.go:632] Waited for 191.1085ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:04.003800   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:04.003800   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.003800   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.003800   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.016471   12544 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 11:13:04.017503   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:04.017577   12544 pod_ready.go:82] duration metric: took 398.5629ms for pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:04.017639   12544 pod_ready.go:39] duration metric: took 5.2081696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:13:04.017639   12544 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:13:04.025943   12544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:13:04.051863   12544 api_server.go:72] duration metric: took 23.7399334s to wait for apiserver process to appear ...
	I0203 11:13:04.051863   12544 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:13:04.051863   12544 api_server.go:253] Checking apiserver healthz at https://172.25.12.47:8443/healthz ...
	I0203 11:13:04.059728   12544 api_server.go:279] https://172.25.12.47:8443/healthz returned 200:
	ok
	I0203 11:13:04.059843   12544 round_trippers.go:463] GET https://172.25.12.47:8443/version
	I0203 11:13:04.059900   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.059900   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.059900   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.061940   12544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 11:13:04.062014   12544 api_server.go:141] control plane version: v1.32.1
	I0203 11:13:04.062085   12544 api_server.go:131] duration metric: took 10.2228ms to wait for apiserver health ...
	I0203 11:13:04.062120   12544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:13:04.204325   12544 request.go:632] Waited for 142.1338ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.204325   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.204325   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.204325   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.204325   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.215758   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:13:04.225100   12544 system_pods.go:59] 24 kube-system pods found
	I0203 11:13:04.225180   12544 system_pods.go:61] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000-m03" [ebe571cc-0005-4236-aa38-df20b82601d8] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kindnet-ss84t" [b831ad88-827e-45b8-a208-78e6bceb72e3] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000-m03" [bc8b6aae-0084-4361-8e17-479a8e9b4d60] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m03" [68b530c4-6823-46b9-a1c6-918cf1443e4a] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-m9nhx" [b12c48d5-de9f-4e4e-aff5-953e5f7bf001] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000-m03" [46b7bb6f-7c5c-4d09-af82-7b34c6022e7e] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000-m03" [1c2bd3bd-fcb7-4fed-9f67-518e4acd72a2] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:13:04.225275   12544 system_pods.go:74] duration metric: took 163.1532ms to wait for pod list to return data ...
	I0203 11:13:04.225275   12544 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:13:04.404143   12544 request.go:632] Waited for 178.8664ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:13:04.404143   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:13:04.404143   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.404143   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.404143   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.410178   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.411008   12544 default_sa.go:45] found service account: "default"
	I0203 11:13:04.411008   12544 default_sa.go:55] duration metric: took 185.7309ms for default service account to be created ...
	I0203 11:13:04.411008   12544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:13:04.603979   12544 request.go:632] Waited for 192.8539ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.603979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.603979   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.603979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.603979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.613186   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:13:04.626128   12544 system_pods.go:86] 24 kube-system pods found
	I0203 11:13:04.626664   12544 system_pods.go:89] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000-m03" [ebe571cc-0005-4236-aa38-df20b82601d8] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-ss84t" [b831ad88-827e-45b8-a208-78e6bceb72e3] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000-m03" [bc8b6aae-0084-4361-8e17-479a8e9b4d60] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m03" [68b530c4-6823-46b9-a1c6-918cf1443e4a] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-m9nhx" [b12c48d5-de9f-4e4e-aff5-953e5f7bf001] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000-m03" [46b7bb6f-7c5c-4d09-af82-7b34c6022e7e] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000-m03" [1c2bd3bd-fcb7-4fed-9f67-518e4acd72a2] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:13:04.626778   12544 system_pods.go:126] duration metric: took 215.7673ms to wait for k8s-apps to be running ...
	I0203 11:13:04.626778   12544 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:13:04.633785   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:13:04.662621   12544 system_svc.go:56] duration metric: took 35.8434ms WaitForService to wait for kubelet
	I0203 11:13:04.662740   12544 kubeadm.go:582] duration metric: took 24.3508042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:13:04.662740   12544 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:13:04.803979   12544 request.go:632] Waited for 141.1319ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes
	I0203 11:13:04.803979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes
	I0203 11:13:04.803979   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.803979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.803979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.810465   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.811430   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:105] duration metric: took 148.8584ms to run NodePressure ...
	I0203 11:13:04.811600   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:13:04.811705   12544 start.go:255] writing updated cluster config ...
	I0203 11:13:04.820233   12544 ssh_runner.go:195] Run: rm -f paused
	I0203 11:13:04.948449   12544 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:13:04.952015   12544 out.go:177] * Done! kubectl is now configured to use "ha-429000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.628084514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.765888854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766250856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766326357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766511258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:06:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/360f3e80c12181ba5c9502a791c6d37a4bd9eb76dafa9ce6bab8b358efb62d5b/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 11:06:01 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:06:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2433d530b47c379da05cf21223ef2f866c380ff582510a431dac3f5733591ea4/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151812181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151886382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151964882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.152076383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185274858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185613060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185842061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.186136962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234018692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234146292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234168192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234356794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:13:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fda7c172f55ef766a8f9d8daa3677620bbe748eb0ec4ea821c244838bdcbbc40/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 03 11:13:41 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:13:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.118383186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119069194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119169995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119449398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b9bdb287bef2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   fda7c172f55ef       busybox-58667487b6-hjbfz
	d82f4a32d763d       c69fa2e9cbf5f                                                                                         8 minutes ago        Running             coredns                   0                   360f3e80c1218       coredns-668d6bf9bc-r5pf5
	d9f3f914a13d8       6e38f40d628db                                                                                         8 minutes ago        Running             storage-provisioner       0                   2433d530b47c3       storage-provisioner
	d7595aa2e7664       c69fa2e9cbf5f                                                                                         8 minutes ago        Running             coredns                   0                   09ac3d992ab71       coredns-668d6bf9bc-5jzvf
	989e99ddf5bb8       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              8 minutes ago        Running             kindnet-cni               0                   d1ba5b18f35b5       kindnet-fv8r6
	3ad219fbdb564       e29f9c7391fd9                                                                                         9 minutes ago        Running             kube-proxy                0                   8017e667cdcc1       kube-proxy-dhm6z
	1eff3743dfbdd       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     9 minutes ago        Running             kube-vip                  0                   fb97d436f0b00       kube-vip-ha-429000
	4c387526ccbee       2b0d6572d062c                                                                                         9 minutes ago        Running             kube-scheduler            0                   bbc148b7d95a2       kube-scheduler-ha-429000
	77604fa1a1e94       019ee182b58e2                                                                                         9 minutes ago        Running             kube-controller-manager   0                   944302cf57a59       kube-controller-manager-ha-429000
	6c03362e02b8f       a9e7e6b294baf                                                                                         9 minutes ago        Running             etcd                      0                   4e4522c4416d9       etcd-ha-429000
	36ff8ead4e917       95c0bda56fc4d                                                                                         9 minutes ago        Running             kube-apiserver            0                   45e0fe3e074c5       kube-apiserver-ha-429000
	
	
	==> coredns [d7595aa2e766] <==
	[INFO] 10.244.2.2:41426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000334804s
	[INFO] 10.244.2.2:50198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000233602s
	[INFO] 10.244.0.4:56085 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000313504s
	[INFO] 10.244.0.4:43627 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223102s
	[INFO] 10.244.0.4:43721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207803s
	[INFO] 10.244.0.4:33104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133301s
	[INFO] 10.244.0.4:56284 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115101s
	[INFO] 10.244.0.4:60159 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000220703s
	[INFO] 10.244.1.2:36340 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095701s
	[INFO] 10.244.1.2:60004 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139301s
	[INFO] 10.244.1.2:54510 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199702s
	[INFO] 10.244.2.2:46423 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291004s
	[INFO] 10.244.2.2:43421 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212203s
	[INFO] 10.244.0.4:36256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140401s
	[INFO] 10.244.0.4:50758 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000291403s
	[INFO] 10.244.0.4:56332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078001s
	[INFO] 10.244.1.2:48813 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000328804s
	[INFO] 10.244.1.2:55305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174902s
	[INFO] 10.244.2.2:60572 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165302s
	[INFO] 10.244.2.2:37570 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000099701s
	[INFO] 10.244.0.4:40645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163502s
	[INFO] 10.244.0.4:36097 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327504s
	[INFO] 10.244.0.4:32981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108801s
	[INFO] 10.244.0.4:58940 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163302s
	[INFO] 10.244.1.2:34333 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134202s
	
	
	==> coredns [d82f4a32d763] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47772 - 61067 "HINFO IN 4472778490497682898.611741258674588714. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.054223085s
	[INFO] 10.244.2.2:49592 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.221214106s
	[INFO] 10.244.2.2:44736 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.198857072s
	[INFO] 10.244.0.4:56161 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.002871532s
	[INFO] 10.244.1.2:52272 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000232902s
	[INFO] 10.244.2.2:53162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185402s
	[INFO] 10.244.2.2:48550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268603s
	[INFO] 10.244.0.4:54448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291303s
	[INFO] 10.244.0.4:40412 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013367151s
	[INFO] 10.244.1.2:41599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135802s
	[INFO] 10.244.1.2:35082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143802s
	[INFO] 10.244.1.2:42027 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227103s
	[INFO] 10.244.1.2:47439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074301s
	[INFO] 10.244.1.2:58807 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102601s
	[INFO] 10.244.2.2:54735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188903s
	[INFO] 10.244.2.2:36301 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148201s
	[INFO] 10.244.0.4:35035 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249503s
	[INFO] 10.244.1.2:34636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235602s
	[INFO] 10.244.1.2:45611 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000674s
	[INFO] 10.244.2.2:38011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166102s
	[INFO] 10.244.2.2:58643 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180902s
	[INFO] 10.244.1.2:53892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145002s
	[INFO] 10.244.1.2:39281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000142802s
	[INFO] 10.244.1.2:51636 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000131501s
	
	
	==> describe nodes <==
	Name:               ha-429000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T11_05_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:05:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:14:02 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:14:02 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:14:02 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:14:02 +0000   Mon, 03 Feb 2025 11:06:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.12.47
	  Hostname:    ha-429000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b02458d3503f4e728e9c53efd3caeef4
	  System UUID:                972948bd-9976-b744-b72e-49603552f61d
	  Boot ID:                    3f567654-2fa8-43dc-ac53-52200ead206b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-hjbfz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-668d6bf9bc-5jzvf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-668d6bf9bc-r5pf5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-ha-429000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m12s
	  kube-system                 kindnet-fv8r6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m7s
	  kube-system                 kube-apiserver-ha-429000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-controller-manager-ha-429000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-proxy-dhm6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-ha-429000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-vip-ha-429000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node ha-429000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node ha-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node ha-429000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m12s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m12s                  kubelet          Node ha-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s                  kubelet          Node ha-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s                  kubelet          Node ha-429000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	  Normal  NodeReady                8m42s                  kubelet          Node ha-429000 status is now: NodeReady
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	  Normal  RegisteredNode           117s                   node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	
	
	Name:               ha-429000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T11_09_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:08:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:14:01 +0000   Mon, 03 Feb 2025 11:08:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:14:01 +0000   Mon, 03 Feb 2025 11:08:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:14:01 +0000   Mon, 03 Feb 2025 11:08:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:14:01 +0000   Mon, 03 Feb 2025 11:09:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.13.142
	  Hostname:    ha-429000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 769190743e2b41f69e10b009f110a981
	  System UUID:                543620fd-d931-a645-b903-0e292a0963ba
	  Boot ID:                    693afb8d-5d43-4d67-ae21-f5181f76ea2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-k7s2q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-429000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m47s
	  kube-system                 kindnet-d7lbp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m49s
	  kube-system                 kube-apiserver-ha-429000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-ha-429000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-proxy-2n5cz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-ha-429000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-vip-ha-429000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  RegisteredNode           5m49s                  node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 5m49s)  kubelet          Node ha-429000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 5m49s)  kubelet          Node ha-429000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x7 over 5m49s)  kubelet          Node ha-429000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	  Normal  RegisteredNode           118s                   node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	
	
	Name:               ha-429000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T11_12_39_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:12:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:14:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:14:05 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:14:05 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:14:05 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:14:05 +0000   Mon, 03 Feb 2025 11:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.0.10
	  Hostname:    ha-429000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccbbda46ccd846c5adf54c6a983de246
	  System UUID:                f085c3e8-6dcb-5848-90b1-62afe6e2042e
	  Boot ID:                    2f47af6c-0a16-4c42-abed-4293a55945a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-hcrnz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-429000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m9s
	  kube-system                 kindnet-ss84t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m10s
	  kube-system                 kube-apiserver-ha-429000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-ha-429000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-m9nhx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-ha-429000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-vip-ha-429000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-429000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-429000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-429000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	  Normal  RegisteredNode           118s                   node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	
	
	==> dmesg <==
	[  +7.418749] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb 3 11:04] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.162509] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.900161] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.100368] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.496016] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.204271] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.223335] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.838161] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.193016] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.180216] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.258010] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[Feb 3 11:05] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[  +0.102538] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.803741] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +6.271371] systemd-fstab-generator[1850]: Ignoring "noauto" option for root device
	[  +0.108048] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.162332] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.891126] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[  +6.432468] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.700814] kauditd_printk_skb: 29 callbacks suppressed
	[Feb 3 11:08] hrtimer: interrupt took 1197708 ns
	[Feb 3 11:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6c03362e02b8] <==
	{"level":"warn","ts":"2025-02-03T11:12:34.432605Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ec877ece3c0d4bc0","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-02-03T11:12:34.916239Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ec877ece3c0d4bc0","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-02-03T11:12:35.915243Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ec877ece3c0d4bc0","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-02-03T11:12:36.415985Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"81267108d219df0f","to":"ec877ece3c0d4bc0","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-02-03T11:12:36.416037Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ec877ece3c0d4bc0"}
	{"level":"info","ts":"2025-02-03T11:12:36.416071Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"81267108d219df0f","remote-peer-id":"ec877ece3c0d4bc0"}
	{"level":"info","ts":"2025-02-03T11:12:36.486499Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"81267108d219df0f","to":"ec877ece3c0d4bc0","stream-type":"stream Message"}
	{"level":"info","ts":"2025-02-03T11:12:36.486579Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"81267108d219df0f","remote-peer-id":"ec877ece3c0d4bc0"}
	{"level":"info","ts":"2025-02-03T11:12:36.522030Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"81267108d219df0f","remote-peer-id":"ec877ece3c0d4bc0"}
	{"level":"info","ts":"2025-02-03T11:12:36.523983Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"81267108d219df0f","remote-peer-id":"ec877ece3c0d4bc0"}
	{"level":"warn","ts":"2025-02-03T11:12:36.915206Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ec877ece3c0d4bc0","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-02-03T11:12:37.455626Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"ec877ece3c0d4bc0","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"1.119721ms"}
	{"level":"warn","ts":"2025-02-03T11:12:37.455692Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a4f71794fcaa9a11","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"1.190821ms"}
	{"level":"info","ts":"2025-02-03T11:12:37.456401Z","caller":"traceutil/trace.go:171","msg":"trace[189270687] transaction","detail":"{read_only:false; response_revision:1475; number_of_response:1; }","duration":"193.33857ms","start":"2025-02-03T11:12:37.263048Z","end":"2025-02-03T11:12:37.456387Z","steps":["trace[189270687] 'process raft request'  (duration: 193.152369ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T11:12:37.915141Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"ec877ece3c0d4bc0","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-02-03T11:12:38.919506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81267108d219df0f switched to configuration voters=(9306249962706296591 11886995670129351185 17043730739042798528)"}
	{"level":"info","ts":"2025-02-03T11:12:38.920071Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"a5e86338b1986a5","local-member-id":"81267108d219df0f"}
	{"level":"info","ts":"2025-02-03T11:12:38.920366Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"81267108d219df0f","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"ec877ece3c0d4bc0"}
	{"level":"warn","ts":"2025-02-03T11:12:42.823165Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"ec877ece3c0d4bc0","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"115.808774ms"}
	{"level":"info","ts":"2025-02-03T11:12:42.823810Z","caller":"traceutil/trace.go:171","msg":"trace[1561771791] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"314.023163ms","start":"2025-02-03T11:12:42.509770Z","end":"2025-02-03T11:12:42.823793Z","steps":["trace[1561771791] 'process raft request'  (duration: 313.860262ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T11:12:42.827868Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T11:12:42.509755Z","time spent":"317.917088ms","remote":"127.0.0.1:52978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1498 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-02-03T11:12:42.828719Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a4f71794fcaa9a11","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"121.427511ms"}
	{"level":"warn","ts":"2025-02-03T11:13:40.033833Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.979338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T11:13:40.034283Z","caller":"traceutil/trace.go:171","msg":"trace[1009316578] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1752; }","duration":"112.407441ms","start":"2025-02-03T11:13:39.921795Z","end":"2025-02-03T11:13:40.034203Z","steps":["trace[1009316578] 'agreement among raft nodes before linearized reading'  (duration: 89.304488ms)","trace[1009316578] 'range keys from in-memory index tree'  (duration: 22.66925ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-03T11:14:01.896001Z","caller":"traceutil/trace.go:171","msg":"trace[313967147] transaction","detail":"{read_only:false; response_revision:1833; number_of_response:1; }","duration":"159.80729ms","start":"2025-02-03T11:14:01.736167Z","end":"2025-02-03T11:14:01.895975Z","steps":["trace[313967147] 'process raft request'  (duration: 84.725449ms)","trace[313967147] 'compare'  (duration: 74.98694ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:14:43 up 11 min,  0 users,  load average: 1.01, 0.70, 0.36
	Linux ha-429000 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [989e99ddf5bb] <==
	I0203 11:13:55.682894       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:14:05.675613       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:14:05.675805       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:14:05.676068       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:14:05.676098       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:14:05.676523       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:14:05.676671       1 main.go:301] handling current node
	I0203 11:14:15.682756       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:14:15.683297       1 main.go:301] handling current node
	I0203 11:14:15.683409       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:14:15.683422       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:14:15.683833       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:14:15.684013       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:14:25.684356       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:14:25.684587       1 main.go:301] handling current node
	I0203 11:14:25.684609       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:14:25.684617       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:14:25.684963       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:14:25.685116       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:14:35.675745       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:14:35.675804       1 main.go:301] handling current node
	I0203 11:14:35.675988       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:14:35.676182       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:14:35.676885       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:14:35.676992       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [36ff8ead4e91] <==
	I0203 11:05:29.736302       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 11:05:30.212593       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 11:05:30.670994       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 11:05:30.697507       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0203 11:05:30.722605       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 11:05:35.367513       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 11:05:35.516935       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0203 11:12:34.038934       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-429000-m03.1820ae58e684cb3f" auditID="022b8383-3d28-4ee5-b198-695f44f6ea74"
	E0203 11:12:34.031741       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.2µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-429000-m03.1820ae58e684cb3f" result=null
	E0203 11:12:34.039220       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.9µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0203 11:13:46.144176       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58482: use of closed network connection
	E0203 11:13:47.877819       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58484: use of closed network connection
	E0203 11:13:48.331705       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58488: use of closed network connection
	E0203 11:13:48.860010       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58490: use of closed network connection
	E0203 11:13:49.322250       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58492: use of closed network connection
	E0203 11:13:49.763983       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58494: use of closed network connection
	E0203 11:13:50.210765       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58496: use of closed network connection
	E0203 11:13:50.649265       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58498: use of closed network connection
	E0203 11:13:51.109481       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58500: use of closed network connection
	E0203 11:13:51.892960       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58503: use of closed network connection
	E0203 11:14:02.348371       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58505: use of closed network connection
	E0203 11:14:02.797946       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58508: use of closed network connection
	E0203 11:14:13.266251       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58511: use of closed network connection
	E0203 11:14:13.714652       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58513: use of closed network connection
	E0203 11:14:24.150346       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58515: use of closed network connection
	
	
	==> kube-controller-manager [77604fa1a1e9] <==
	I0203 11:12:58.761961       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	I0203 11:12:59.949617       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	I0203 11:13:03.847749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	I0203 11:13:31.101668       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:13:39.206776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="130.353759ms"
	I0203 11:13:39.276744       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.432757ms"
	I0203 11:13:39.277122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="52µs"
	I0203 11:13:39.278754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.1µs"
	I0203 11:13:39.310332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="59.101µs"
	I0203 11:13:39.366721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="198.101µs"
	I0203 11:13:39.406075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.101µs"
	I0203 11:13:39.407089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.701µs"
	I0203 11:13:39.727480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="287.284893ms"
	I0203 11:13:40.082344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="354.486836ms"
	I0203 11:13:40.138062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="55.171364ms"
	I0203 11:13:40.138181       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.2µs"
	I0203 11:13:42.510089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.273964ms"
	I0203 11:13:42.510733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.701µs"
	I0203 11:13:42.827703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="19.36402ms"
	I0203 11:13:42.828008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="81.701µs"
	I0203 11:13:43.297992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.600667ms"
	I0203 11:13:43.298342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="292.004µs"
	I0203 11:14:01.731735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:14:02.227073       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000"
	I0203 11:14:05.087306       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	
	
	==> kube-proxy [3ad219fbdb56] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 11:05:38.307640       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 11:05:38.322687       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.47"]
	E0203 11:05:38.322860       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 11:05:38.411214       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 11:05:38.411297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 11:05:38.411328       1 server_linux.go:170] "Using iptables Proxier"
	I0203 11:05:38.432366       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 11:05:38.432788       1 server.go:497] "Version info" version="v1.32.1"
	I0203 11:05:38.432826       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:05:38.436253       1 config.go:199] "Starting service config controller"
	I0203 11:05:38.436274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 11:05:38.436296       1 config.go:105] "Starting endpoint slice config controller"
	I0203 11:05:38.436301       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 11:05:38.436778       1 config.go:329] "Starting node config controller"
	I0203 11:05:38.436788       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 11:05:38.537329       1 shared_informer.go:320] Caches are synced for node config
	I0203 11:05:38.537365       1 shared_informer.go:320] Caches are synced for service config
	I0203 11:05:38.537376       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4c387526ccbe] <==
	W0203 11:05:28.617464       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0203 11:05:28.617855       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.622391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0203 11:05:28.622621       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.632706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0203 11:05:28.633284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.694851       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0203 11:05:28.695199       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.768345       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 11:05:28.768637       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.778729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0203 11:05:28.778926       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.805838       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 11:05:28.805973       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.806263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 11:05:28.806460       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.809799       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0203 11:05:28.809845       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.821455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0203 11:05:28.821676       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 11:05:30.341351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0203 11:13:39.241707       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-hcrnz\": pod busybox-58667487b6-hcrnz is already assigned to node \"ha-429000-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-hcrnz" node="ha-429000-m03"
	E0203 11:13:39.251063       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod f71e2d64-2c7a-460d-b2c5-82f234c46aec(default/busybox-58667487b6-hcrnz) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-hcrnz"
	E0203 11:13:39.251390       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-hcrnz\": pod busybox-58667487b6-hcrnz is already assigned to node \"ha-429000-m03\"" pod="default/busybox-58667487b6-hcrnz"
	I0203 11:13:39.251706       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-hcrnz" node="ha-429000-m03"
	
	
	==> kubelet <==
	Feb 03 11:10:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:10:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:11:30 ha-429000 kubelet[2379]: E0203 11:11:30.795312    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:11:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:11:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:11:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:11:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:12:30 ha-429000 kubelet[2379]: E0203 11:12:30.795887    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:12:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:12:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:12:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:12:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:13:30 ha-429000 kubelet[2379]: E0203 11:13:30.796606    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:13:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:13:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:13:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:13:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:13:39 ha-429000 kubelet[2379]: I0203 11:13:39.226706    2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=476.218148196 podStartE2EDuration="7m56.218148196s" podCreationTimestamp="2025-02-03 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 11:06:03.154761171 +0000 UTC m=+32.586391302" watchObservedRunningTime="2025-02-03 11:13:39.218148196 +0000 UTC m=+488.649778427"
	Feb 03 11:13:39 ha-429000 kubelet[2379]: I0203 11:13:39.361478    2379 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlng8\" (UniqueName: \"kubernetes.io/projected/b607381c-4671-4960-b9ea-a22065ada1b9-kube-api-access-nlng8\") pod \"busybox-58667487b6-hjbfz\" (UID: \"b607381c-4671-4960-b9ea-a22065ada1b9\") " pod="default/busybox-58667487b6-hjbfz"
	Feb 03 11:13:40 ha-429000 kubelet[2379]: I0203 11:13:40.421251    2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda7c172f55ef766a8f9d8daa3677620bbe748eb0ec4ea821c244838bdcbbc40"
	Feb 03 11:14:30 ha-429000 kubelet[2379]: E0203 11:14:30.796652    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:14:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:14:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:14:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:14:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-429000 -n ha-429000
E0203 11:14:48.925082    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-429000 -n ha-429000: (11.2369146s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-429000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (65.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (162.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 node start m02 -v=7 --alsologtostderr: exit status 1 (1m26.8044405s)

                                                
                                                
-- stdout --
	* Starting "ha-429000-m02" control-plane node in "ha-429000" cluster
	* Restarting existing hyperv VM for "ha-429000-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:31:10.480183   10496 out.go:345] Setting OutFile to fd 2000 ...
	I0203 11:31:11.116215   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:31:11.116215   10496 out.go:358] Setting ErrFile to fd 1384...
	I0203 11:31:11.116297   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:31:11.129255   10496 mustload.go:65] Loading cluster: ha-429000
	I0203 11:31:11.129905   10496 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:31:11.131264   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:13.094969   10496 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 11:31:13.094969   10496 main.go:141] libmachine: [stderr =====>] : 
	W0203 11:31:13.094969   10496 host.go:58] "ha-429000-m02" host status: Stopped
	I0203 11:31:13.099603   10496 out.go:177] * Starting "ha-429000-m02" control-plane node in "ha-429000" cluster
	I0203 11:31:13.102417   10496 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:31:13.102466   10496 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 11:31:13.102466   10496 cache.go:56] Caching tarball of preloaded images
	I0203 11:31:13.103069   10496 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:31:13.103069   10496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:31:13.103069   10496 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:31:13.104835   10496 start.go:360] acquireMachinesLock for ha-429000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:31:13.105430   10496 start.go:364] duration metric: took 68.2µs to acquireMachinesLock for "ha-429000-m02"
	I0203 11:31:13.105430   10496 start.go:96] Skipping create...Using existing machine configuration
	I0203 11:31:13.105430   10496 fix.go:54] fixHost starting: m02
	I0203 11:31:13.106107   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:15.077732   10496 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 11:31:15.077808   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:15.077808   10496 fix.go:112] recreateIfNeeded on ha-429000-m02: state=Stopped err=<nil>
	W0203 11:31:15.077857   10496 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 11:31:15.080973   10496 out.go:177] * Restarting existing hyperv VM for "ha-429000-m02" ...
	I0203 11:31:15.082751   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m02
	I0203 11:31:17.994049   10496 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:31:17.994126   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:17.994126   10496 main.go:141] libmachine: Waiting for host to start...
	I0203 11:31:17.994203   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:20.071468   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:20.072176   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:20.072262   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:22.409668   10496 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:31:22.409668   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:23.410358   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:25.443120   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:25.443536   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:25.443631   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:27.777855   10496 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:31:27.777855   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:28.778615   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:30.790507   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:30.791552   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:30.791618   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:33.097545   10496 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:31:33.097545   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:34.098282   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:36.191249   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:36.191810   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:36.191810   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:38.509965   10496 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:31:38.510314   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:39.510440   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:41.540078   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:41.540252   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:41.540252   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:44.087138   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:31:44.087709   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:44.089964   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:46.104060   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:46.105088   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:46.105239   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:48.506912   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:31:48.507602   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:48.507879   10496 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:31:48.510273   10496 machine.go:93] provisionDockerMachine start ...
	I0203 11:31:48.510433   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:50.526612   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:50.526612   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:50.526612   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:52.940007   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:31:52.940007   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:52.946306   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:31:52.947187   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:31:52.947187   10496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:31:53.082127   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:31:53.082249   10496 buildroot.go:166] provisioning hostname "ha-429000-m02"
	I0203 11:31:53.082249   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:55.101044   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:55.101044   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:55.101044   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:31:57.487612   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:31:57.487915   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:57.491875   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:31:57.492494   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:31:57.492494   10496 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000-m02 && echo "ha-429000-m02" | sudo tee /etc/hostname
	I0203 11:31:57.660315   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m02
	
	I0203 11:31:57.660315   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:31:59.675907   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:31:59.675907   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:31:59.675907   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:02.050680   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:02.050680   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:02.055766   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:32:02.055766   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:32:02.055766   10496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:32:02.195036   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:32:02.195153   10496 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:32:02.195210   10496 buildroot.go:174] setting up certificates
	I0203 11:32:02.195210   10496 provision.go:84] configureAuth start
	I0203 11:32:02.195271   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:04.218574   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:04.218929   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:04.218929   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:06.551277   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:06.551277   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:06.552062   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:08.546295   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:08.546445   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:08.546445   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:10.912341   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:10.913245   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:10.913424   10496 provision.go:143] copyHostCerts
	I0203 11:32:10.913699   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:32:10.914069   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:32:10.914119   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:32:10.914653   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:32:10.916175   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:32:10.916175   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:32:10.916175   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:32:10.916980   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:32:10.918329   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:32:10.918573   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:32:10.918631   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:32:10.919173   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:32:10.919692   10496 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m02 san=[127.0.0.1 172.25.5.33 ha-429000-m02 localhost minikube]
	I0203 11:32:11.068479   10496 provision.go:177] copyRemoteCerts
	I0203 11:32:11.075922   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:32:11.075922   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:13.071721   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:13.072595   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:13.072702   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:15.433893   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:15.433893   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:15.435242   10496 sshutil.go:53] new ssh client: &{IP:172.25.5.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:32:15.548085   10496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4720625s)
	I0203 11:32:15.548185   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:32:15.548333   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:32:15.595852   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:32:15.596871   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:32:15.641752   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:32:15.642081   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:32:15.686542   10496 provision.go:87] duration metric: took 13.4911813s to configureAuth
	I0203 11:32:15.686542   10496 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:32:15.686797   10496 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:32:15.687255   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:17.678461   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:17.678461   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:17.678536   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:20.047303   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:20.048054   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:20.051894   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:32:20.052581   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:32:20.052581   10496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:32:20.182862   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:32:20.182968   10496 buildroot.go:70] root file system type: tmpfs
	I0203 11:32:20.183235   10496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:32:20.183327   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:22.151441   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:22.151441   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:22.151543   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:24.524075   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:24.524128   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:24.527456   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:32:24.528053   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:32:24.528053   10496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:32:24.685837   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:32:24.685837   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:26.660961   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:26.660961   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:26.660961   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:29.046417   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:29.046417   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:29.054512   10496 main.go:141] libmachine: Using SSH client type: native
	I0203 11:32:29.054512   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
	I0203 11:32:29.054512   10496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:32:31.565930   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:32:31.566040   10496 machine.go:96] duration metric: took 43.0552852s to provisionDockerMachine
	I0203 11:32:31.566136   10496 start.go:293] postStartSetup for "ha-429000-m02" (driver="hyperv")
	I0203 11:32:31.566213   10496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:32:31.576311   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:32:31.576311   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:32:33.541307   10496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:32:33.541307   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:33.541307   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:32:35.909754   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33
	
	I0203 11:32:35.909754   10496 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:32:35.909754   10496 sshutil.go:53] new ssh client: &{IP:172.25.5.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:32:36.019961   10496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4436002s)
	I0203 11:32:36.030380   10496 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:32:36.037589   10496 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:32:36.037589   10496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:32:36.037589   10496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:32:36.038617   10496 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:32:36.038617   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:32:36.047751   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:32:36.068007   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:32:36.114411   10496 start.go:296] duration metric: took 4.5481823s for postStartSetup
	I0203 11:32:36.123106   10496 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0203 11:32:36.123106   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state

** /stderr **
ha_test.go:424: I0203 11:31:10.480183   10496 out.go:345] Setting OutFile to fd 2000 ...
I0203 11:31:11.116215   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:31:11.116215   10496 out.go:358] Setting ErrFile to fd 1384...
I0203 11:31:11.116297   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 11:31:11.129255   10496 mustload.go:65] Loading cluster: ha-429000
I0203 11:31:11.129905   10496 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 11:31:11.131264   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:13.094969   10496 main.go:141] libmachine: [stdout =====>] : Off

I0203 11:31:13.094969   10496 main.go:141] libmachine: [stderr =====>] : 
W0203 11:31:13.094969   10496 host.go:58] "ha-429000-m02" host status: Stopped
I0203 11:31:13.099603   10496 out.go:177] * Starting "ha-429000-m02" control-plane node in "ha-429000" cluster
I0203 11:31:13.102417   10496 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0203 11:31:13.102466   10496 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
I0203 11:31:13.102466   10496 cache.go:56] Caching tarball of preloaded images
I0203 11:31:13.103069   10496 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0203 11:31:13.103069   10496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
I0203 11:31:13.103069   10496 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
I0203 11:31:13.104835   10496 start.go:360] acquireMachinesLock for ha-429000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0203 11:31:13.105430   10496 start.go:364] duration metric: took 68.2µs to acquireMachinesLock for "ha-429000-m02"
I0203 11:31:13.105430   10496 start.go:96] Skipping create...Using existing machine configuration
I0203 11:31:13.105430   10496 fix.go:54] fixHost starting: m02
I0203 11:31:13.106107   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:15.077732   10496 main.go:141] libmachine: [stdout =====>] : Off

I0203 11:31:15.077808   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:15.077808   10496 fix.go:112] recreateIfNeeded on ha-429000-m02: state=Stopped err=<nil>
W0203 11:31:15.077857   10496 fix.go:138] unexpected machine state, will restart: <nil>
I0203 11:31:15.080973   10496 out.go:177] * Restarting existing hyperv VM for "ha-429000-m02" ...
I0203 11:31:15.082751   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m02
I0203 11:31:17.994049   10496 main.go:141] libmachine: [stdout =====>] : 
I0203 11:31:17.994126   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:17.994126   10496 main.go:141] libmachine: Waiting for host to start...
I0203 11:31:17.994203   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:20.071468   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:20.072176   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:20.072262   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:22.409668   10496 main.go:141] libmachine: [stdout =====>] : 
I0203 11:31:22.409668   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:23.410358   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:25.443120   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:25.443536   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:25.443631   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:27.777855   10496 main.go:141] libmachine: [stdout =====>] : 
I0203 11:31:27.777855   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:28.778615   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:30.790507   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:30.791552   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:30.791618   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:33.097545   10496 main.go:141] libmachine: [stdout =====>] : 
I0203 11:31:33.097545   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:34.098282   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:36.191249   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:36.191810   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:36.191810   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:38.509965   10496 main.go:141] libmachine: [stdout =====>] : 
I0203 11:31:38.510314   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:39.510440   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:41.540078   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:41.540252   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:41.540252   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:44.087138   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:31:44.087709   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:44.089964   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:46.104060   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:46.105088   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:46.105239   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:48.506912   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:31:48.507602   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:48.507879   10496 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
I0203 11:31:48.510273   10496 machine.go:93] provisionDockerMachine start ...
I0203 11:31:48.510433   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:50.526612   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:50.526612   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:50.526612   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:52.940007   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:31:52.940007   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:52.946306   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:31:52.947187   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:31:52.947187   10496 main.go:141] libmachine: About to run SSH command:
hostname
I0203 11:31:53.082127   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0203 11:31:53.082249   10496 buildroot.go:166] provisioning hostname "ha-429000-m02"
I0203 11:31:53.082249   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:55.101044   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:55.101044   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:55.101044   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:31:57.487612   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:31:57.487915   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:57.491875   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:31:57.492494   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:31:57.492494   10496 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-429000-m02 && echo "ha-429000-m02" | sudo tee /etc/hostname
I0203 11:31:57.660315   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m02

I0203 11:31:57.660315   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:31:59.675907   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:31:59.675907   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:31:59.675907   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:02.050680   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:02.050680   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:02.055766   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:32:02.055766   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:32:02.055766   10496 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-429000-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-429000-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0203 11:32:02.195036   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0203 11:32:02.195153   10496 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0203 11:32:02.195210   10496 buildroot.go:174] setting up certificates
I0203 11:32:02.195210   10496 provision.go:84] configureAuth start
I0203 11:32:02.195271   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:04.218574   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:04.218929   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:04.218929   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:06.551277   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:06.551277   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:06.552062   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:08.546295   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:08.546445   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:08.546445   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:10.912341   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:10.913245   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:10.913424   10496 provision.go:143] copyHostCerts
I0203 11:32:10.913699   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0203 11:32:10.914069   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0203 11:32:10.914119   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0203 11:32:10.914653   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
I0203 11:32:10.916175   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0203 11:32:10.916175   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0203 11:32:10.916175   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0203 11:32:10.916980   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0203 11:32:10.918329   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0203 11:32:10.918573   10496 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0203 11:32:10.918631   10496 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0203 11:32:10.919173   10496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0203 11:32:10.919692   10496 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m02 san=[127.0.0.1 172.25.5.33 ha-429000-m02 localhost minikube]
I0203 11:32:11.068479   10496 provision.go:177] copyRemoteCerts
I0203 11:32:11.075922   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0203 11:32:11.075922   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:13.071721   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:13.072595   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:13.072702   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:15.433893   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:15.433893   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:15.435242   10496 sshutil.go:53] new ssh client: &{IP:172.25.5.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
I0203 11:32:15.548085   10496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4720625s)
I0203 11:32:15.548185   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0203 11:32:15.548333   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0203 11:32:15.595852   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0203 11:32:15.596871   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0203 11:32:15.641752   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0203 11:32:15.642081   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0203 11:32:15.686542   10496 provision.go:87] duration metric: took 13.4911813s to configureAuth
I0203 11:32:15.686542   10496 buildroot.go:189] setting minikube options for container-runtime
I0203 11:32:15.686797   10496 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 11:32:15.687255   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:17.678461   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:17.678461   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:17.678536   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:20.047303   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:20.048054   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:20.051894   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:32:20.052581   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:32:20.052581   10496 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0203 11:32:20.182862   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0203 11:32:20.182968   10496 buildroot.go:70] root file system type: tmpfs
I0203 11:32:20.183235   10496 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0203 11:32:20.183327   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:22.151441   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:22.151441   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:22.151543   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:24.524075   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:24.524128   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:24.527456   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:32:24.528053   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:32:24.528053   10496 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0203 11:32:24.685837   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0203 11:32:24.685837   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:26.660961   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:26.660961   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:26.660961   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:29.046417   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:29.046417   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:29.054512   10496 main.go:141] libmachine: Using SSH client type: native
I0203 11:32:29.054512   10496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.5.33 22 <nil> <nil>}
I0203 11:32:29.054512   10496 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0203 11:32:31.565930   10496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0203 11:32:31.566040   10496 machine.go:96] duration metric: took 43.0552852s to provisionDockerMachine
I0203 11:32:31.566136   10496 start.go:293] postStartSetup for "ha-429000-m02" (driver="hyperv")
I0203 11:32:31.566213   10496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0203 11:32:31.576311   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0203 11:32:31.576311   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
I0203 11:32:33.541307   10496 main.go:141] libmachine: [stdout =====>] : Running

I0203 11:32:33.541307   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:33.541307   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
I0203 11:32:35.909754   10496 main.go:141] libmachine: [stdout =====>] : 172.25.5.33

I0203 11:32:35.909754   10496 main.go:141] libmachine: [stderr =====>] : 
I0203 11:32:35.909754   10496 sshutil.go:53] new ssh client: &{IP:172.25.5.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
I0203 11:32:36.019961   10496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4436002s)
I0203 11:32:36.030380   10496 ssh_runner.go:195] Run: cat /etc/os-release
I0203 11:32:36.037589   10496 info.go:137] Remote host: Buildroot 2023.02.9
I0203 11:32:36.037589   10496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0203 11:32:36.037589   10496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0203 11:32:36.038617   10496 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
I0203 11:32:36.038617   10496 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
I0203 11:32:36.047751   10496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0203 11:32:36.068007   10496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
I0203 11:32:36.114411   10496 start.go:296] duration metric: took 4.5481823s for postStartSetup
I0203 11:32:36.123106   10496 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0203 11:32:36.123106   10496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-429000 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (751.4µs)
I0203 11:32:37.223649    5452 retry.go:31] will retry after 744.280013ms: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0203 11:32:37.968909    5452 retry.go:31] will retry after 911.592514ms: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0203 11:32:38.881482    5452 retry.go:31] will retry after 2.274491152s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0203 11:32:41.156427    5452 retry.go:31] will retry after 4.796450758s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0203 11:32:45.953207    5452 retry.go:31] will retry after 3.064819943s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (405.1µs)
I0203 11:32:49.019422    5452 retry.go:31] will retry after 5.0756037s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (219µs)
I0203 11:32:54.096132    5452 retry.go:31] will retry after 13.551610975s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0203 11:33:07.647935    5452 retry.go:31] will retry after 12.562090075s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:434: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr" : context deadline exceeded
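The retry.go lines above show the status check being re-run with a growing, jittered delay until the surrounding context deadline expires. As a rough illustration of that pattern only (this is not minikube's actual retry.go; the helper name retryWithBackoff, the attempt cap, and the backoff constants are assumptions), a minimal Go sketch could look like:

// Illustrative sketch of a retry-with-backoff loop like the one reflected in
// the "will retry after ..." log lines above. Names and constants are assumed,
// not taken from minikube's source.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds, maxAttempts is reached, or
// the context deadline passes. Each wait roughly doubles and gets jitter added.
func retryWithBackoff(ctx context.Context, maxAttempts int, fn func() error) error {
	wait := 500 * time.Millisecond
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Randomize the delay so repeated callers do not retry in lockstep.
		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		select {
		case <-time.After(jittered):
		case <-ctx.Done():
			return ctx.Err()
		}
		wait *= 2
	}
	return err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := retryWithBackoff(ctx, 5, func() error {
		return fmt.Errorf("context deadline exceeded") // stand-in for the failing status call
	})
	fmt.Println("final error:", err)
}

The jitter keeps concurrent retries from synchronizing, while the context check makes the loop give up as soon as the test's overall deadline is hit, which is why every attempt above fails immediately with "context deadline exceeded".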
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-429000 -n ha-429000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-429000 -n ha-429000: (11.3974643s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 logs -n 25: (8.0698342s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:25 UTC | 03 Feb 25 11:25 UTC |
	|         | ha-429000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:25 UTC | 03 Feb 25 11:25 UTC |
	|         | ha-429000:/home/docker/cp-test_ha-429000-m03_ha-429000.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:25 UTC | 03 Feb 25 11:25 UTC |
	|         | ha-429000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000 sudo cat                                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:25 UTC | 03 Feb 25 11:26 UTC |
	|         | /home/docker/cp-test_ha-429000-m03_ha-429000.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	|         | ha-429000-m02:/home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	|         | ha-429000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000-m02 sudo cat                                                                                  | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	|         | /home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:26 UTC |
	|         | ha-429000-m04:/home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:26 UTC | 03 Feb 25 11:27 UTC |
	|         | ha-429000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000-m04 sudo cat                                                                                  | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:27 UTC |
	|         | /home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-429000 cp testdata\cp-test.txt                                                                                        | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:27 UTC |
	|         | ha-429000-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:27 UTC |
	|         | ha-429000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:27 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:27 UTC |
	|         | ha-429000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:27 UTC | 03 Feb 25 11:28 UTC |
	|         | ha-429000:/home/docker/cp-test_ha-429000-m04_ha-429000.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | ha-429000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000 sudo cat                                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | /home/docker/cp-test_ha-429000-m04_ha-429000.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | ha-429000-m02:/home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | ha-429000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000-m02 sudo cat                                                                                  | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:28 UTC |
	|         | /home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt                                                                      | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:28 UTC | 03 Feb 25 11:29 UTC |
	|         | ha-429000-m03:/home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n                                                                                                         | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:29 UTC | 03 Feb 25 11:29 UTC |
	|         | ha-429000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-429000 ssh -n ha-429000-m03 sudo cat                                                                                  | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:29 UTC | 03 Feb 25 11:29 UTC |
	|         | /home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-429000 node stop m02 -v=7                                                                                             | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:29 UTC | 03 Feb 25 11:29 UTC |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	| node    | ha-429000 node start m02 -v=7                                                                                            | ha-429000 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:31 UTC |                     |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 11:02:36
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 11:02:36.636040   12544 out.go:345] Setting OutFile to fd 1628 ...
	I0203 11:02:36.695209   12544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:02:36.695209   12544 out.go:358] Setting ErrFile to fd 392...
	I0203 11:02:36.695209   12544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:02:36.715129   12544 out.go:352] Setting JSON to false
	I0203 11:02:36.717962   12544 start.go:129] hostinfo: {"hostname":"minikube5","uptime":165158,"bootTime":1738415398,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 11:02:36.718059   12544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 11:02:36.724491   12544 out.go:177] * [ha-429000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 11:02:36.728915   12544 notify.go:220] Checking for updates...
	I0203 11:02:36.730973   12544 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:02:36.733322   12544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 11:02:36.735558   12544 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 11:02:36.737932   12544 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 11:02:36.740356   12544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 11:02:36.743141   12544 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 11:02:41.619465   12544 out.go:177] * Using the hyperv driver based on user configuration
	I0203 11:02:41.625437   12544 start.go:297] selected driver: hyperv
	I0203 11:02:41.625437   12544 start.go:901] validating driver "hyperv" against <nil>
	I0203 11:02:41.625437   12544 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 11:02:41.671256   12544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 11:02:41.672472   12544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:02:41.672472   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:02:41.672472   12544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0203 11:02:41.672472   12544 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 11:02:41.673083   12544 start.go:340] cluster config:
	{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:02:41.673083   12544 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 11:02:41.677983   12544 out.go:177] * Starting "ha-429000" primary control-plane node in "ha-429000" cluster
	I0203 11:02:41.686815   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:02:41.686815   12544 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 11:02:41.686815   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:02:41.687585   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:02:41.687585   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:02:41.688971   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:02:41.689732   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json: {Name:mk7825012338486fc7b9918dde319dc426284704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:02:41.691089   12544 start.go:360] acquireMachinesLock for ha-429000: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:02:41.691089   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000"
	I0203 11:02:41.691089   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:02:41.691745   12544 start.go:125] createHost starting for "" (driver="hyperv")
	I0203 11:02:41.695079   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:02:41.695844   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:02:41.695916   12544 client.go:168] LocalClient.Create starting
	I0203 11:02:41.696369   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:02:41.696610   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:02:41.696647   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:02:41.696820   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:02:41.696886   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:02:41.696886   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:02:41.696886   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:02:43.589007   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:02:43.589106   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:43.589106   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:02:45.192205   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:02:45.192407   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:45.192407   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:02:46.631999   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:02:46.631999   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:46.632794   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:02:49.913390   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:02:49.913390   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:49.914706   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:02:50.356920   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:02:50.472928   12544 main.go:141] libmachine: Creating VM...
	I0203 11:02:50.472928   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:02:53.034927   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:02:53.035299   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:53.035395   12544 main.go:141] libmachine: Using switch "Default Switch"
	I0203 11:02:53.035395   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:02:54.640401   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:02:54.640810   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:54.640810   12544 main.go:141] libmachine: Creating VHD
	I0203 11:02:54.640929   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:02:58.250878   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ABEDF975-BA03-4A02-84F3-295B7D025EC3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:02:58.250878   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:02:58.250878   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:02:58.250878   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:02:58.263124   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:03:01.245829   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:01.246561   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:01.246561   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd' -SizeBytes 20000MB
	I0203 11:03:03.608156   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:03.608156   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:03.609021   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:03:06.977987   12544 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-429000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:03:06.977987   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:06.978504   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000 -DynamicMemoryEnabled $false
	I0203 11:03:09.103029   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:09.103029   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:09.103460   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000 -Count 2
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:11.099198   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\boot2docker.iso'
	I0203 11:03:13.447575   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:13.447575   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:13.447878   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\disk.vhd'
	I0203 11:03:15.897161   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:15.897161   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:15.897161   12544 main.go:141] libmachine: Starting VM...
	I0203 11:03:15.897256   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000
	I0203 11:03:18.768569   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:18.768762   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:18.768762   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:03:18.768762   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:20.829320   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:23.138898   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:23.139461   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:24.139579   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:26.113236   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:26.113236   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:26.114222   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:28.422252   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:28.422252   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:29.423373   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:31.442167   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:31.442659   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:31.442659   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:33.751203   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:33.751203   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:34.753082   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:36.769493   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:36.769493   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:36.769577   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:39.086225   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:03:39.086225   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:40.088027   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:42.125238   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:42.125277   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:42.125277   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:44.533227   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:44.533227   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:44.533660   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:46.545049   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:03:46.545049   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:48.529291   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:50.861050   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:50.861050   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:50.866060   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:50.881148   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:50.881148   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:03:51.016450   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:03:51.016544   12544 buildroot.go:166] provisioning hostname "ha-429000"
	I0203 11:03:51.016544   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:52.990749   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:52.990749   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:52.991485   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:55.345005   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:55.345005   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:55.349936   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:55.350347   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:55.350347   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000 && echo "ha-429000" | sudo tee /etc/hostname
	I0203 11:03:55.500160   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000
	
	I0203 11:03:55.500297   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:57.457872   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:03:59.849315   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:03:59.849947   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:03:59.854073   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:03:59.854708   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:03:59.854708   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:03:59.993140   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:03:59.993260   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:03:59.993260   12544 buildroot.go:174] setting up certificates
	I0203 11:03:59.993260   12544 provision.go:84] configureAuth start
	I0203 11:03:59.993371   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:01.934604   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:01.934604   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:01.935504   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:04.287574   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:04.287647   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:04.287647   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:06.305410   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:06.305410   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:06.306005   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:08.654040   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:08.654040   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:08.654040   12544 provision.go:143] copyHostCerts
	I0203 11:04:08.654946   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:04:08.654946   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:04:08.654946   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:04:08.655709   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:04:08.656319   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:04:08.656917   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:04:08.656917   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:04:08.656917   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:04:08.659040   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:04:08.659040   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:04:08.659040   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:04:08.659654   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:04:08.661158   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000 san=[127.0.0.1 172.25.12.47 ha-429000 localhost minikube]
	I0203 11:04:08.764668   12544 provision.go:177] copyRemoteCerts
	I0203 11:04:08.772662   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:04:08.772662   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:10.688102   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:10.688102   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:10.689114   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:13.041812   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:13.041812   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:13.042587   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:13.143562   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3708495s)
	I0203 11:04:13.143562   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:04:13.143562   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:04:13.188264   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:04:13.188943   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0203 11:04:13.232749   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:04:13.232749   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:04:13.277256   12544 provision.go:87] duration metric: took 13.2837973s to configureAuth
	I0203 11:04:13.277290   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:04:13.277718   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:04:13.277718   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:15.259940   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:15.259940   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:15.260592   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:17.580224   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:17.580224   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:17.585328   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:17.585328   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:17.585328   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:04:17.707918   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:04:17.707979   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:04:17.708237   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:04:17.708318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:19.645853   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:19.646432   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:19.646534   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:21.963841   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:21.963841   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:21.970240   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:21.970835   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:21.970835   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:04:22.128681   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:04:22.128681   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:24.068711   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:24.068711   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:24.069400   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:26.413108   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:26.413345   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:26.418316   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:26.418972   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:26.418972   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:04:28.623313   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:04:28.623313   12544 machine.go:96] duration metric: took 42.0777802s to provisionDockerMachine
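Note: the unit update above follows a write-then-swap pattern. The candidate unit is written to /lib/systemd/system/docker.service.new (the \$MAINPID escape in the command is consumed by the outer shell, which is why the written file shows a plain $MAINPID), and it is only moved into place, with a daemon-reload/enable/restart, when it differs from the installed unit. On this first boot no docker.service exists yet, so diff fails with "No such file or directory", the replace branch runs, and the "Created symlink" line is the output of systemctl enable. Condensed, with $UNIT_BODY standing in for the full unit text:

  # write the candidate unit next to the (possibly missing) installed one
  printf '%s' "$UNIT_BODY" | sudo tee /lib/systemd/system/docker.service.new
  # swap it in and restart only when the content differs or the unit is absent
  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
  }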
	I0203 11:04:28.623313   12544 client.go:171] duration metric: took 1m46.9261678s to LocalClient.Create
	I0203 11:04:28.623313   12544 start.go:167] duration metric: took 1m46.9262757s to libmachine.API.Create "ha-429000"
	I0203 11:04:28.623313   12544 start.go:293] postStartSetup for "ha-429000" (driver="hyperv")
	I0203 11:04:28.623313   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:04:28.632777   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:04:28.632777   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:30.579483   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:30.579483   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:30.579893   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:32.885239   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:32.885239   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:32.885798   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:32.999510   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3666832s)
	I0203 11:04:33.007791   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:04:33.014801   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:04:33.014801   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:04:33.015397   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:04:33.016087   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:04:33.016087   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:04:33.023748   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:04:33.042114   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:04:33.086815   12544 start.go:296] duration metric: took 4.4634505s for postStartSetup
	I0203 11:04:33.090670   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:35.051345   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:35.051638   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:35.051638   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:37.420889   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:37.421337   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:37.421374   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:04:37.423604   12544 start.go:128] duration metric: took 1m55.7305288s to createHost
	I0203 11:04:37.423658   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:39.381519   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:39.381519   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:39.381936   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:41.694058   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:41.694058   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:41.698414   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:41.699075   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:41.699075   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:04:41.830241   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738580681.840494996
	
	I0203 11:04:41.830349   12544 fix.go:216] guest clock: 1738580681.840494996
	I0203 11:04:41.830349   12544 fix.go:229] Guest: 2025-02-03 11:04:41.840494996 +0000 UTC Remote: 2025-02-03 11:04:37.4236582 +0000 UTC m=+120.886945701 (delta=4.416836796s)
	I0203 11:04:41.830423   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:43.772530   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:43.772530   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:43.772729   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:46.139036   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:46.139085   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:46.142822   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:04:46.143263   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.47 22 <nil> <nil>}
	I0203 11:04:46.143263   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738580681
	I0203 11:04:46.288894   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:04:41 UTC 2025
	
	I0203 11:04:46.288894   12544 fix.go:236] clock set: Mon Feb  3 11:04:41 UTC 2025
	 (err=<nil>)
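The clock fix above reads the guest's time over SSH, compares it with the host-side timestamp recorded for the machine, and, since the delta (about 4.4 s in this run) is large enough, rewrites the guest clock with one-second precision. The two commands involved, exactly as issued:

  date +%s.%N               # guest reports 1738580681.840494996
  sudo date -s @1738580681  # reset to the host-derived epoch (whole seconds only)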
	I0203 11:04:46.288894   12544 start.go:83] releasing machines lock for "ha-429000", held for 2m4.5963721s
	I0203 11:04:46.288894   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:48.238629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:50.547260   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:50.547260   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:50.550750   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:04:50.550830   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:50.557818   12544 ssh_runner.go:195] Run: cat /version.json
	I0203 11:04:50.557887   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:52.540421   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:52.543870   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:04:54.999994   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:54.999994   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:55.001397   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:55.021292   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:04:55.021292   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:04:55.022172   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:04:55.091837   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5410349s)
	W0203 11:04:55.092855   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
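This failed probe is what later triggers the registry warning at 11:04:55.225: the check is issued as curl.exe (the Windows binary name) but executes inside the Linux guest, where no such command exists, so it exits 127 regardless of whether registry.k8s.io is actually reachable. A probe that would really test connectivity from the guest would presumably be the Linux spelling with the same flags:

  curl -sS -m 2 https://registry.k8s.io/   # hypothetical corrected in-guest check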
	I0203 11:04:55.116367   12544 ssh_runner.go:235] Completed: cat /version.json: (4.5584971s)
	I0203 11:04:55.128751   12544 ssh_runner.go:195] Run: systemctl --version
	I0203 11:04:55.149747   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 11:04:55.158387   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:04:55.170328   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:04:55.198575   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:04:55.198575   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:04:55.198647   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0203 11:04:55.225287   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:04:55.225313   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 11:04:55.244971   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 11:04:55.274205   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:04:55.297592   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:04:55.305297   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:04:55.333341   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:04:55.362607   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:04:55.392266   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:04:55.422264   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:04:55.452037   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:04:55.480368   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:04:55.507459   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 11:04:55.534662   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:04:55.552956   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:04:55.560291   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:04:55.590148   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
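The sysctl probe fails only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; loading the module and enabling IPv4 forwarding (both done above) are the usual prerequisites for the bridge CNI and kube-proxy's iptables rules. In order:

  sudo modprobe br_netfilter                           # creates /proc/sys/net/bridge/*
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # allow routed pod traffic
  sudo sysctl net.bridge.bridge-nf-call-iptables       # should now resolve instead of erroring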
	I0203 11:04:55.617307   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:04:55.817223   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:04:55.849497   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:04:55.857296   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:04:55.888999   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:04:55.921070   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:04:55.952902   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:04:55.984649   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:04:56.015146   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:04:56.073733   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:04:56.097514   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:04:56.143949   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:04:56.159232   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:04:56.176107   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:04:56.216555   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:04:56.420343   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:04:56.612144   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:04:56.612353   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:04:56.653569   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:04:56.837834   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:04:59.416917   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5790533s)
	I0203 11:04:59.425089   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:04:59.456518   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:04:59.491802   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:04:59.672346   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:04:59.866811   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:00.052443   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:05:00.090656   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:05:00.121575   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:00.314084   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:05:00.417377   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:05:00.426677   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:05:00.434854   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:05:00.443568   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:05:00.456358   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:05:00.509275   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:05:00.517275   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:05:00.557081   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
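The sequence above pins the cluster to Docker via cri-dockerd: containerd and crio are stopped and confirmed inactive, crictl is pointed at /var/run/cri-dockerd.sock, Docker is switched to the cgroupfs driver through a generated /etc/docker/daemon.json, and docker.socket plus cri-docker.socket/cri-docker.service are unmasked, enabled and restarted. The end state can be confirmed on the node with checks that also appear elsewhere in this log:

  cat /etc/crictl.yaml                        # runtime-endpoint: unix:///var/run/cri-dockerd.sock
  docker info --format '{{.CgroupDriver}}'    # cgroupfs, per the daemon.json written above
  sudo /usr/bin/crictl version                # RuntimeName: docker, RuntimeApiVersion: v1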
	I0203 11:05:00.592070   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:05:00.592070   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:05:00.596154   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:05:00.598907   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:05:00.598907   12544 ip.go:214] interface addr: 172.25.0.1/20
	I0203 11:05:00.607164   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:05:00.613461   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
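The one-liner above is the idempotent hosts-file update: drop any existing line tagged host.minikube.internal, append the current mapping, write the result to a temp file and copy it back with sudo (copying rather than editing in place presumably sidesteps drivers where /etc/hosts is a bind mount). The same pattern is reused below for control-plane.minikube.internal. Expanded for readability:

  {
    grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
    printf '172.25.0.1\thost.minikube.internal\n'     # append the fresh mapping
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts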
	I0203 11:05:00.647506   12544 kubeadm.go:883] updating cluster {Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP
:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 11:05:00.648512   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:05:00.655164   12544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 11:05:00.679965   12544 docker.go:689] Got preloaded images: 
	I0203 11:05:00.680022   12544 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0203 11:05:00.689486   12544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 11:05:00.715666   12544 ssh_runner.go:195] Run: which lz4
	I0203 11:05:00.722040   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0203 11:05:00.730243   12544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 11:05:00.735797   12544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 11:05:00.735797   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0203 11:05:02.080569   12544 docker.go:653] duration metric: took 1.3581868s to copy over tarball
	I0203 11:05:02.090447   12544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 11:05:10.804136   12544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7135898s)
	I0203 11:05:10.804136   12544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 11:05:10.864010   12544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 11:05:10.881323   12544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0203 11:05:10.923405   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:11.118720   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:05:14.486504   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3677454s)
	I0203 11:05:14.494726   12544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 11:05:14.522484   12544 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 11:05:14.522484   12544 cache_images.go:84] Images are preloaded, skipping loading
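Since the freshly created VM has no images, the ~350 MB preload tarball (preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4) is copied over SSH and unpacked directly into /var, which places the preloaded overlay2 layers under /var/lib/docker; a merged repositories.json is then written and Docker restarted so it picks them up, and the image list above confirms the result. The key steps, as run:

  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo systemctl restart docker
  docker images --format '{{.Repository}}:{{.Tag}}'   # now lists the registry.k8s.io/kube-* images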
	I0203 11:05:14.522484   12544 kubeadm.go:934] updating node { 172.25.12.47 8443 v1.32.1 docker true true} ...
	I0203 11:05:14.522484   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
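The kubelet fragment above is installed as a systemd drop-in (the 308-byte 10-kubeadm.conf scp'd a few lines below): the empty ExecStart= clears whatever the base unit defines, and the replacement line pins --node-ip, --hostname-override and the bootstrap/kubelet kubeconfig paths for this node. After the daemon-reload, the effective unit can be inspected the same way the log inspects Docker's:

  sudo systemctl cat kubelet       # base unit plus the 10-kubeadm.conf drop-in
  sudo systemctl start kubelet     # done below, once manifests and certs are in place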
	I0203 11:05:14.529951   12544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 11:05:14.595314   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:05:14.595314   12544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 11:05:14.595314   12544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 11:05:14.595314   12544 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.12.47 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-429000 NodeName:ha-429000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.12.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.12.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 11:05:14.595554   12544 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.12.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-429000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.12.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.12.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
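The generated kubeadm.yaml bundles four documents: InitConfiguration (node registration, cri-dockerd socket, advertise address), ClusterConfiguration (control-plane endpoint, cert SANs, etcd data dir), KubeletConfiguration (cgroupfs driver, relaxed eviction/image-GC thresholds) and KubeProxyConfiguration (cluster CIDR, conntrack overrides). It is written as /var/tmp/minikube/kubeadm.yaml.new and copied into place before init; a hypothetical way to sanity-check it without touching the node would be a dry run:

  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # illustrative only; the log runs the real init below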
	
	I0203 11:05:14.595642   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:05:14.603294   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:05:14.632170   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:05:14.632281   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
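kube-vip is written as a static-pod manifest (/etc/kubernetes/manifests/kube-vip.yaml, 1446 bytes, a few lines below), so once the kubelet starts it advertises the control-plane VIP 172.25.15.254 on eth0 via ARP, uses the plndr-cp-lock lease to elect which control-plane node holds the address, and with lb_enable load-balances API traffic on port 8443. On the elected node the VIP appears as a secondary address; an illustrative check would be:

  ip -4 addr show eth0 | grep 172.25.15.254   # present only on the current kube-vip leader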
	I0203 11:05:14.640644   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:05:14.660264   12544 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 11:05:14.667899   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0203 11:05:14.684838   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0203 11:05:14.714339   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:05:14.743057   12544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0203 11:05:14.771632   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0203 11:05:14.807164   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:05:14.812898   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:05:14.841502   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:05:15.023078   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:05:15.053418   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.12.47
	I0203 11:05:15.053418   12544 certs.go:194] generating shared ca certs ...
	I0203 11:05:15.053418   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.054290   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:05:15.054578   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:05:15.054780   12544 certs.go:256] generating profile certs ...
	I0203 11:05:15.054780   12544 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:05:15.054780   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt with IP's: []
	I0203 11:05:15.123746   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt ...
	I0203 11:05:15.123746   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.crt: {Name:mk21594987226891b0c4f972f870b155c5d864cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.125805   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key ...
	I0203 11:05:15.125805   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key: {Name:mkcf578e3dae88b14a8a464a3a8699cfe02a0a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.126221   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f
	I0203 11:05:15.126221   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.15.254]
	I0203 11:05:15.287451   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f ...
	I0203 11:05:15.287451   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f: {Name:mk54f3556c0c51c77a0cf6c7587764da5183a0ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.288610   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f ...
	I0203 11:05:15.288610   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f: {Name:mkf3715d2b09c66d1e874f0449dfd4c304fef4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.289817   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.8e80910f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:05:15.304355   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.8e80910f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
	I0203 11:05:15.305428   12544 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:05:15.305566   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt with IP's: []
	I0203 11:05:15.865765   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt ...
	I0203 11:05:15.865765   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt: {Name:mk7b154d21f2248eaa830b2d9ad69b94e0288b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.866937   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key ...
	I0203 11:05:15.866937   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key: {Name:mke4e4b4019cc65c959d9f37f62d35a296df9db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:15.868184   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:05:15.869029   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:05:15.869565   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:05:15.869605   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:05:15.882424   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:05:15.882680   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:05:15.883337   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:05:15.883538   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:05:15.884084   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:05:15.884576   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:05:15.884783   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:05:15.884988   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:15.885082   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:05:15.886281   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:05:15.931752   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:05:15.975677   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:05:16.023252   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:05:16.069105   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 11:05:16.110991   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:05:16.149349   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:05:16.200916   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:05:16.246911   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:05:16.293587   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:05:16.337643   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:05:16.380754   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 11:05:16.418302   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:05:16.436215   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:05:16.464712   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.472495   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.480946   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:05:16.497926   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:05:16.526276   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:05:16.552913   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.559994   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.568720   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:05:16.585055   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:05:16.614247   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:05:16.641907   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.649407   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.657451   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:05:16.674812   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
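The openssl/ln sequence above installs each CA bundle much as update-ca-certificates would: the PEM is copied into /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and additionally linked under its OpenSSL subject hash with a .0 suffix so the default verification lookup can find it. For the minikube CA in this run:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0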
	I0203 11:05:16.703901   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:05:16.710585   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:05:16.710879   12544 kubeadm.go:392] StartCluster: {Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:17
2.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:05:16.717461   12544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 11:05:16.751642   12544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 11:05:16.783825   12544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 11:05:16.812308   12544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 11:05:16.829267   12544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 11:05:16.829267   12544 kubeadm.go:157] found existing configuration files:
	
	I0203 11:05:16.837433   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 11:05:16.853412   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 11:05:16.861702   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 11:05:16.888390   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 11:05:16.906674   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 11:05:16.915717   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 11:05:16.941516   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 11:05:16.958769   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 11:05:16.967678   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 11:05:16.993691   12544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 11:05:17.009296   12544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 11:05:17.017976   12544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 11:05:17.035480   12544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 11:05:17.411347   12544 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 11:05:31.241346   12544 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0203 11:05:31.241469   12544 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 11:05:31.241607   12544 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 11:05:31.241849   12544 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 11:05:31.242104   12544 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 11:05:31.242241   12544 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 11:05:31.246026   12544 out.go:235]   - Generating certificates and keys ...
	I0203 11:05:31.246026   12544 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 11:05:31.247045   12544 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 11:05:31.247580   12544 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 11:05:31.247748   12544 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-429000 localhost] and IPs [172.25.12.47 127.0.0.1 ::1]
	I0203 11:05:31.247780   12544 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 11:05:31.248452   12544 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-429000 localhost] and IPs [172.25.12.47 127.0.0.1 ::1]
	I0203 11:05:31.248452   12544 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 11:05:31.248804   12544 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 11:05:31.248942   12544 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 11:05:31.249080   12544 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 11:05:31.249119   12544 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 11:05:31.249332   12544 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 11:05:31.249872   12544 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 11:05:31.253307   12544 out.go:235]   - Booting up control plane ...
	I0203 11:05:31.254284   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 11:05:31.254546   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 11:05:31.254757   12544 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 11:05:31.254903   12544 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 11:05:31.255115   12544 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 11:05:31.255115   12544 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 11:05:31.255402   12544 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 11:05:31.255676   12544 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 11:05:31.255859   12544 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002122061s
	I0203 11:05:31.256036   12544 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 11:05:31.256212   12544 kubeadm.go:310] [api-check] The API server is healthy after 7.501871328s
	I0203 11:05:31.256364   12544 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 11:05:31.256674   12544 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 11:05:31.256867   12544 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 11:05:31.257003   12544 kubeadm.go:310] [mark-control-plane] Marking the node ha-429000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 11:05:31.257322   12544 kubeadm.go:310] [bootstrap-token] Using token: 35pwxs.9cd3az0fhrerr81u
	I0203 11:05:31.259948   12544 out.go:235]   - Configuring RBAC rules ...
	I0203 11:05:31.260626   12544 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 11:05:31.260844   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 11:05:31.261043   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 11:05:31.261363   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 11:05:31.261643   12544 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 11:05:31.261839   12544 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 11:05:31.261955   12544 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 11:05:31.261955   12544 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0203 11:05:31.261955   12544 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0203 11:05:31.261955   12544 kubeadm.go:310] 
	I0203 11:05:31.261955   12544 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0203 11:05:31.261955   12544 kubeadm.go:310] 
	I0203 11:05:31.262632   12544 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0203 11:05:31.262632   12544 kubeadm.go:310] 
	I0203 11:05:31.262632   12544 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0203 11:05:31.262632   12544 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 11:05:31.262956   12544 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 11:05:31.263076   12544 kubeadm.go:310] 
	I0203 11:05:31.263267   12544 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0203 11:05:31.263267   12544 kubeadm.go:310] 
	I0203 11:05:31.263377   12544 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 11:05:31.263377   12544 kubeadm.go:310] 
	I0203 11:05:31.263482   12544 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0203 11:05:31.263652   12544 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 11:05:31.263872   12544 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 11:05:31.263917   12544 kubeadm.go:310] 
	I0203 11:05:31.264007   12544 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 11:05:31.264007   12544 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0203 11:05:31.264007   12544 kubeadm.go:310] 
	I0203 11:05:31.264007   12544 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 35pwxs.9cd3az0fhrerr81u \
	I0203 11:05:31.264645   12544 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce \
	I0203 11:05:31.264690   12544 kubeadm.go:310] 	--control-plane 
	I0203 11:05:31.264690   12544 kubeadm.go:310] 
	I0203 11:05:31.264908   12544 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0203 11:05:31.264908   12544 kubeadm.go:310] 
	I0203 11:05:31.265121   12544 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 35pwxs.9cd3az0fhrerr81u \
	I0203 11:05:31.265337   12544 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
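
	(Editor's note on the join command above: kubeadm's --discovery-token-ca-cert-hash is a SHA-256 pin over the cluster CA's Subject Public Key Info. A minimal Go sketch of that computation follows, assuming the CA certificate is at /var/lib/minikube/certs/ca.crt under the certificateDir reported above; the file name is an assumption, not taken from this log.)

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA certificate (assumed path inside the minikube VM).
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's hash is "sha256:" + hex digest of the CA's SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum[:])
	}
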
	I0203 11:05:31.265337   12544 cni.go:84] Creating CNI manager for ""
	I0203 11:05:31.265337   12544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 11:05:31.269437   12544 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 11:05:31.281180   12544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 11:05:31.288899   12544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 11:05:31.288899   12544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 11:05:31.337415   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 11:05:31.908795   12544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 11:05:31.919247   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000 minikube.k8s.io/updated_at=2025_02_03T11_05_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=true
	I0203 11:05:31.919917   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:31.967537   12544 ops.go:34] apiserver oom_adj: -16
	I0203 11:05:32.188604   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:32.688247   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:33.189805   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:33.688396   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:34.190477   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:34.690001   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:35.188629   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 11:05:35.316485   12544 kubeadm.go:1113] duration metric: took 3.4076514s to wait for elevateKubeSystemPrivileges
	I0203 11:05:35.316485   12544 kubeadm.go:394] duration metric: took 18.6053941s to StartCluster
	I0203 11:05:35.316485   12544 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:35.316485   12544 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:05:35.318838   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:05:35.319962   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 11:05:35.320109   12544 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:05:35.320109   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:05:35.320109   12544 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 11:05:35.320332   12544 addons.go:69] Setting storage-provisioner=true in profile "ha-429000"
	I0203 11:05:35.320332   12544 addons.go:69] Setting default-storageclass=true in profile "ha-429000"
	I0203 11:05:35.320332   12544 addons.go:238] Setting addon storage-provisioner=true in "ha-429000"
	I0203 11:05:35.320332   12544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-429000"
	I0203 11:05:35.320437   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:05:35.320554   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:05:35.321398   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:35.321732   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:35.482552   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 11:05:35.962598   12544 start.go:971] {"host.minikube.internal": 172.25.0.1} host record injected into CoreDNS's ConfigMap
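
	(Editor's note: the sed pipeline above inserts a hosts{} stanza for host.minikube.internal ahead of CoreDNS's forward directive in the coredns ConfigMap. A rough Go equivalent of that string edit is sketched below using the IP and hostname from this run; the Corefile contents shown are illustrative, not the cluster's actual ConfigMap.)

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts{} stanza just before the forward
	// directive, roughly what the sed pipeline in the log does.
	func injectHostRecord(corefile, hostIP, hostname string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", hostIP, hostname)
		anchor := "        forward . /etc/resolv.conf"
		return strings.Replace(corefile, anchor, stanza+anchor, 1)
	}

	func main() {
		// Illustrative Corefile fragment.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(corefile, "172.25.0.1", "host.minikube.internal"))
	}
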
	I0203 11:05:37.406737   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:37.406737   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:37.409403   12544 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 11:05:37.411525   12544 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:05:37.411525   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 11:05:37.411525   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:37.417806   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:37.417806   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:37.418706   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:05:37.418706   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 11:05:37.420544   12544 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 11:05:37.421259   12544 addons.go:238] Setting addon default-storageclass=true in "ha-429000"
	I0203 11:05:37.421259   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:05:37.421927   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:39.510253   12544 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 11:05:39.510253   12544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 11:05:39.510253   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:05:39.555519   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:39.555519   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:39.555633   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:41.641418   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:05:42.339264   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:05:42.340282   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:42.340580   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:05:42.469873   12544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 11:05:44.088766   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:05:44.088880   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:44.088880   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:05:44.221173   12544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 11:05:44.428233   12544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 11:05:44.428233   12544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 11:05:44.429230   12544 round_trippers.go:463] GET https://172.25.15.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0203 11:05:44.429230   12544 round_trippers.go:469] Request Headers:
	I0203 11:05:44.429230   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:05:44.429230   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:05:44.442211   12544 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 11:05:44.443156   12544 round_trippers.go:463] PUT https://172.25.15.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0203 11:05:44.443156   12544 round_trippers.go:469] Request Headers:
	I0203 11:05:44.443215   12544 round_trippers.go:473]     Content-Type: application/json
	I0203 11:05:44.443215   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:05:44.443215   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:05:44.446920   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:05:44.450111   12544 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 11:05:44.452534   12544 addons.go:514] duration metric: took 9.1323212s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0203 11:05:44.452597   12544 start.go:246] waiting for cluster config update ...
	I0203 11:05:44.452597   12544 start.go:255] writing updated cluster config ...
	I0203 11:05:44.455048   12544 out.go:201] 
	I0203 11:05:44.468415   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:05:44.468415   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:05:44.473413   12544 out.go:177] * Starting "ha-429000-m02" control-plane node in "ha-429000" cluster
	I0203 11:05:44.475414   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:05:44.475414   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:05:44.475414   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:05:44.475414   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:05:44.476410   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:05:44.485409   12544 start.go:360] acquireMachinesLock for ha-429000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:05:44.485409   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000-m02"
	I0203 11:05:44.485409   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:05:44.485409   12544 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0203 11:05:44.488419   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:05:44.488419   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:05:44.488419   12544 client.go:168] LocalClient.Create starting
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:05:44.489420   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:05:46.268227   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:05:46.268227   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:46.269172   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:05:47.875039   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:05:47.875784   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:47.875784   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:05:49.264588   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:05:49.264588   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:49.265181   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:05:52.666998   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:05:52.666998   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:52.668815   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:05:53.079625   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:05:53.171177   12544 main.go:141] libmachine: Creating VM...
	I0203 11:05:53.172172   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:05:55.798887   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:05:55.799494   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:55.799589   12544 main.go:141] libmachine: Using switch "Default Switch"
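
	(Editor's note: the switch query above returns JSON from ConvertTo-Json and the driver picks an external switch if one exists, otherwise the well-known "Default Switch". A small Go sketch of parsing that output is shown below; the vmSwitch struct and its field set are assumptions for illustration, not minikube's actual types.)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch mirrors the fields selected by the PowerShell query in the log
	// (Select Id, Name, SwitchType).
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		// JSON shaped like the ConvertTo-Json output captured above.
		raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
		var switches []vmSwitch
		if err := json.Unmarshal(raw, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			// SwitchType 2 is External, 1 is Internal; the driver prefers External.
			fmt.Printf("candidate switch: %s (type %d)\n", s.Name, s.SwitchType)
		}
	}
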
	I0203 11:05:55.799589   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:05:57.424027   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:05:57.424027   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:05:57.424844   12544 main.go:141] libmachine: Creating VHD
	I0203 11:05:57.424889   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:06:00.998300   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E66ADEE4-F243-4E9B-A93D-4BA9DC2A0585
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:06:00.998300   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:00.998300   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:06:00.998415   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:06:01.011278   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:06:04.062923   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:04.062923   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:04.063724   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd' -SizeBytes 20000MB
	I0203 11:06:06.466753   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:06.466753   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:06.467817   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:06:09.835625   12544 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-429000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:06:09.836455   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:09.836527   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000-m02 -DynamicMemoryEnabled $false
	I0203 11:06:11.963374   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:11.963426   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:11.963426   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000-m02 -Count 2
	I0203 11:06:14.018852   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:14.018852   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:14.019720   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\boot2docker.iso'
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:16.440954   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\disk.vhd'
	I0203 11:06:18.884676   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:18.885271   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:18.885271   12544 main.go:141] libmachine: Starting VM...
	I0203 11:06:18.885353   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m02
	I0203 11:06:21.803470   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:21.804468   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:21.804468   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:06:21.804521   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:23.905607   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:23.905607   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:23.906338   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:26.246229   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:26.246329   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:27.247298   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:29.250137   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:29.250318   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:29.250318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:31.536942   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:31.536942   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:32.538368   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:34.563022   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:34.563964   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:34.564056   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:36.877834   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:36.877834   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:37.878527   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:39.889847   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:42.183172   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:06:42.183240   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:43.184264   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:45.196955   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:45.196955   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:45.197965   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:47.580757   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:49.547661   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:49.548047   12544 main.go:141] libmachine: [stderr =====>] : 
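
	(Editor's note: the "Waiting for host to start..." block above polls Hyper-V until the new VM reports an IPv4 address on its first adapter. A hedged Go sketch of such a polling loop follows; waitForIP is a hypothetical helper, not minikube's driver code, and the timeout is arbitrary.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIP polls Hyper-V via PowerShell until the named VM reports an IP,
	// mirroring the retry loop visible in the log.
	func waitForIP(vmName string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
		for time.Now().Before(deadline) {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
			ip := strings.TrimSpace(string(out))
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second) // wait and retry while the guest is still booting
		}
		return "", fmt.Errorf("timed out waiting for an IP on VM %q", vmName)
	}

	func main() {
		ip, err := waitForIP("ha-429000-m02", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip)
	}
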
	I0203 11:06:49.548047   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:06:49.548140   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:51.547018   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:53.932273   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:53.932273   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:53.937350   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:06:53.950248   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:06:53.950248   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:06:54.084372   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:06:54.084372   12544 buildroot.go:166] provisioning hostname "ha-429000-m02"
	I0203 11:06:54.084372   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:06:56.056531   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:06:56.056531   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:56.056629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:06:58.365371   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:06:58.365371   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:06:58.370098   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:06:58.370585   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:06:58.370585   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000-m02 && echo "ha-429000-m02" | sudo tee /etc/hostname
	I0203 11:06:58.533727   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m02
	
	I0203 11:06:58.533843   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:00.489467   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:02.835448   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:02.836192   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:02.839971   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:02.840404   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:02.840404   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:07:02.997122   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:07:02.997122   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:07:02.997122   12544 buildroot.go:174] setting up certificates
	I0203 11:07:02.997122   12544 provision.go:84] configureAuth start
	I0203 11:07:02.997122   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:04.993734   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:04.993734   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:04.993950   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:07.371247   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:07.372137   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:07.372196   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:09.338927   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:09.339437   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:09.339437   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:11.706190   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:11.707195   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:11.707336   12544 provision.go:143] copyHostCerts
	I0203 11:07:11.707336   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:07:11.707336   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:07:11.707336   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:07:11.708017   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:07:11.708607   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:07:11.708607   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:07:11.708607   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:07:11.709222   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:07:11.709883   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:07:11.710061   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:07:11.710061   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:07:11.710355   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:07:11.711101   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m02 san=[127.0.0.1 172.25.13.142 ha-429000-m02 localhost minikube]
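
	(Editor's note: the server certificate generated above is signed by the minikube CA and carries the SANs listed in the san=[...] field. A minimal Go sketch of issuing a certificate with those SANs follows; it creates a throwaway CA instead of loading minikube's real one, and most error handling is elided for brevity.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow would load ca.pem/ca-key.pem from disk).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs reported in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-429000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"ha-429000-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.25.13.142")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Println("server cert DER bytes:", len(srvDER))
	}
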
	I0203 11:07:11.952210   12544 provision.go:177] copyRemoteCerts
	I0203 11:07:11.960728   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:07:11.960799   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:13.883430   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:16.249359   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:16.249359   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:16.249726   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:16.357054   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3962758s)
	I0203 11:07:16.357137   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:07:16.357495   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:07:16.403693   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:07:16.403856   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:07:16.449050   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:07:16.449457   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:07:16.495449   12544 provision.go:87] duration metric: took 13.4981727s to configureAuth
	I0203 11:07:16.495449   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:07:16.496297   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:07:16.496297   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:18.495945   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:18.496575   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:18.496761   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:20.869730   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:20.869785   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:20.873967   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:20.873967   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:20.873967   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:07:21.007325   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:07:21.007393   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:07:21.007393   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:07:21.007393   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:22.963472   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:22.963472   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:22.964349   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:25.316955   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:25.318022   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:25.322261   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:25.322261   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:25.322787   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.47"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:07:25.486402   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.47
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:07:25.486514   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:27.448291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:27.448291   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:27.448381   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:29.808286   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:29.808286   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:29.813385   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:29.813786   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:29.813786   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:07:32.016735   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
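The `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload && enable && restart docker; }` step above is the idempotent-update idiom: the freshly rendered docker.service (note the empty `ExecStart=` that clears the inherited command before the new one is set) only replaces the on-disk unit and triggers a restart when the content actually differs; here the file did not exist yet, so the replacement path ran and created the enable symlink. A minimal Go sketch of the same compare-then-replace idea; the paths and restart commands are illustrative assumptions, not minikube's provisioner code:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged writes newContent to path only when it differs from the
// current file (or the file is missing) and reports whether anything changed.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical content: no rewrite, no restart
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := replaceIfChanged("/lib/systemd/system/docker.service", unit)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		// Same follow-up as the log: reload systemd, enable and restart docker.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}
}
```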
	I0203 11:07:32.016798   12544 machine.go:96] duration metric: took 42.4682663s to provisionDockerMachine
	I0203 11:07:32.016798   12544 client.go:171] duration metric: took 1m47.5271525s to LocalClient.Create
	I0203 11:07:32.016798   12544 start.go:167] duration metric: took 1m47.5271525s to libmachine.API.Create "ha-429000"
	I0203 11:07:32.016869   12544 start.go:293] postStartSetup for "ha-429000-m02" (driver="hyperv")
	I0203 11:07:32.016869   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:07:32.024681   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:07:32.024681   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:33.952025   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:33.952025   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:33.953006   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:36.328038   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:36.328192   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:36.328770   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:36.433565   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4088342s)
	I0203 11:07:36.443143   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:07:36.450301   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:07:36.450301   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:07:36.450301   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:07:36.451562   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:07:36.451667   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:07:36.463167   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:07:36.481285   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:07:36.531887   12544 start.go:296] duration metric: took 4.5148747s for postStartSetup
	I0203 11:07:36.533915   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:38.496867   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:38.497376   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:38.497534   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:40.848094   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:40.848094   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:40.849108   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:07:40.851073   12544 start.go:128] duration metric: took 1m56.3643371s to createHost
	I0203 11:07:40.851179   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:42.798664   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:42.798664   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:42.798752   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:45.107269   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:45.107269   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:45.111945   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:45.112319   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:45.112319   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:07:45.249328   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738580865.264456433
	
	I0203 11:07:45.249328   12544 fix.go:216] guest clock: 1738580865.264456433
	I0203 11:07:45.249405   12544 fix.go:229] Guest: 2025-02-03 11:07:45.264456433 +0000 UTC Remote: 2025-02-03 11:07:40.8510736 +0000 UTC m=+304.312267301 (delta=4.413382833s)
	I0203 11:07:45.249477   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:47.197567   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:49.579785   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:49.579785   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:49.586783   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:07:49.587236   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.13.142 22 <nil> <nil>}
	I0203 11:07:49.587309   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738580865
	I0203 11:07:49.730869   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:07:45 UTC 2025
	
	I0203 11:07:49.730922   12544 fix.go:236] clock set: Mon Feb  3 11:07:45 UTC 2025
	 (err=<nil>)
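In the fix.go lines above, the host reads the guest's clock with `date +%s.%N`, compares it against a host-side reference time (a drift of about 4.4s here), and then pushes an epoch back into the VM with `sudo date -s @<seconds>`. A small Go sketch of parsing the guest clock and computing that drift; the drift threshold and exactly which timestamp is pushed back are simplified assumptions rather than minikube's exact logic:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses the output of `date +%s.%N` from the guest and returns
// how far the guest clock is ahead of (positive) or behind (negative) hostRef.
func guestDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

func main() {
	// Reference time taken from the log's "Remote:" timestamp.
	hostRef := time.Date(2025, 2, 3, 11, 7, 40, 851073600, time.UTC)
	delta, err := guestDelta("1738580865.264456433\n", hostRef)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock drift: %v\n", delta)
	// Illustration of the correction the log performs over SSH.
	fmt.Printf("correction command: sudo date -s @%d\n", hostRef.Unix())
}
```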
	I0203 11:07:49.730922   12544 start.go:83] releasing machines lock for "ha-429000-m02", held for 2m5.2440843s
	I0203 11:07:49.731097   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:51.681462   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:54.024421   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:54.024421   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:54.027485   12544 out.go:177] * Found network options:
	I0203 11:07:54.030787   12544 out.go:177]   - NO_PROXY=172.25.12.47
	W0203 11:07:54.033044   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:07:54.036070   12544 out.go:177]   - NO_PROXY=172.25.12.47
	W0203 11:07:54.038068   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:07:54.040119   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:07:54.042352   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:07:54.042500   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:54.050168   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 11:07:54.050168   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:07:56.053155   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:56.053258   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:56.053317   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 11:07:58.447051   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:58.447104   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:58.447104   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:58.465429   12544 main.go:141] libmachine: [stdout =====>] : 172.25.13.142
	
	I0203 11:07:58.465429   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:07:58.465429   12544 sshutil.go:53] new ssh client: &{IP:172.25.13.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m02\id_rsa Username:docker}
	I0203 11:07:58.557101   12544 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5068061s)
	W0203 11:07:58.557184   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:07:58.565319   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:07:58.567403   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5249998s)
	W0203 11:07:58.567403   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 11:07:58.593658   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:07:58.593658   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:07:58.593963   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:07:58.642037   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0203 11:07:58.667727   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:07:58.667727   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 11:07:58.668727   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:07:58.693518   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:07:58.701095   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:07:58.728210   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:07:58.756282   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:07:58.783590   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:07:58.811673   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:07:58.838749   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:07:58.866041   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:07:58.893910   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
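The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver (`SystemdCgroup = false`), the runc v2 runtime, the expected pause image, the CNI conf dir, and unprivileged ports, before the daemon is restarted. A rough Go equivalent of one of those substitutions, applied to an in-memory TOML snippet (the snippet and file handling are assumptions for illustration only):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
```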
	I0203 11:07:58.921814   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:07:58.938925   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:07:58.947564   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:07:58.978806   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:07:59.003249   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:07:59.195975   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:07:59.227982   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:07:59.237133   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:07:59.266974   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:07:59.301437   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:07:59.334862   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:07:59.368201   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:07:59.400062   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:07:59.462702   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:07:59.491052   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:07:59.534019   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:07:59.551818   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:07:59.570195   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:07:59.612847   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:07:59.797009   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:07:59.970334   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:07:59.970334   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:08:00.011940   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:00.207325   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:08:02.797686   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5903313s)
	I0203 11:08:02.806053   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:08:02.837050   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:08:02.867054   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:08:03.059012   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:08:03.247004   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:03.444558   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:08:03.483681   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:08:03.515443   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:03.709902   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:08:03.816150   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:08:03.823468   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:08:03.832753   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:08:03.841213   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:08:03.854128   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:08:03.903145   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:08:03.910121   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:08:03.952117   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:08:03.990898   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:08:03.993580   12544 out.go:177]   - env NO_PROXY=172.25.12.47
	I0203 11:08:03.997644   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:08:04.001233   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:08:04.001754   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:08:04.004033   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:08:04.004033   12544 ip.go:214] interface addr: 172.25.0.1/20
	I0203 11:08:04.011029   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:08:04.017054   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
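The one-liner above keeps /etc/hosts idempotent: it filters out any existing line ending in a tab plus `host.minikube.internal`, appends a fresh mapping to the Default Switch gateway (172.25.0.1), writes the result to a temp file and copies it back, so repeated provisioning never duplicates the entry. A hedged Go sketch of the same filter-and-append idea on an in-memory copy of the file, not the real /etc/hosts:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line already mapping hostname and appends a
// fresh "ip<TAB>hostname" line, mirroring the grep -v + echo idiom in the log.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.25.0.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "172.25.0.1", "host.minikube.internal"))
}
```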
	I0203 11:08:04.039130   12544 mustload.go:65] Loading cluster: ha-429000
	I0203 11:08:04.039414   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:08:04.040065   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:06.028951   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:06.029112   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:06.029112   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:08:06.029833   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.13.142
	I0203 11:08:06.029833   12544 certs.go:194] generating shared ca certs ...
	I0203 11:08:06.029833   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.030349   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:08:06.030610   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:08:06.030610   12544 certs.go:256] generating profile certs ...
	I0203 11:08:06.031234   12544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:08:06.031347   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa
	I0203 11:08:06.031505   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.13.142 172.25.15.254]
	I0203 11:08:06.211013   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa ...
	I0203 11:08:06.211013   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa: {Name:mk49e737d3682f472190d3b64ef4f7e34ffb5ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.212020   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa ...
	I0203 11:08:06.212020   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa: {Name:mkf4bcf3e40665551dd559d734fad4d6a11f8ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:08:06.213021   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.579642fa -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:08:06.229255   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.579642fa -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
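certs.go regenerates the apiserver serving certificate here because adding m02 changes the set of subject alternative names: the service IP 10.96.0.1, loopback, both control-plane node IPs and the kube-vip VIP 172.25.15.254 must all appear as IP SANs, and the cert is signed by the cached minikubeCA. A condensed Go sketch of signing such a certificate with crypto/x509, using the SAN list from the log; key sizes, validity periods and the in-memory CA are assumptions, not minikube's certs.go:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA key pair (minikube reuses its cached minikubeCA material).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate template with the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.25.12.47"), net.ParseIP("172.25.13.142"), net.ParseIP("172.25.15.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the CA-signed server cert in PEM form (apiserver.crt in the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```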
	I0203 11:08:06.230200   12544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:08:06.230200   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:08:06.231447   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:08:06.231567   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:08:06.231751   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:08:06.231751   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:08:06.231751   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:08:06.232334   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:08:06.232941   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:08:06.232992   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:06.232992   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:08.173759   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:08.173759   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:08.173851   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:08:10.506259   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:08:10.506259   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:10.509933   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:08:10.615797   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0203 11:08:10.624564   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0203 11:08:10.651460   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0203 11:08:10.657788   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0203 11:08:10.687374   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0203 11:08:10.694707   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0203 11:08:10.727906   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0203 11:08:10.734514   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0203 11:08:10.768974   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0203 11:08:10.775832   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0203 11:08:10.804704   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0203 11:08:10.811887   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0203 11:08:10.831437   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:08:10.878274   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:08:10.926494   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:08:10.977012   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:08:11.020231   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0203 11:08:11.065107   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 11:08:11.111806   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:08:11.156926   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:08:11.202611   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:08:11.247297   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:08:11.290015   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:08:11.332513   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0203 11:08:11.361800   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0203 11:08:11.391668   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0203 11:08:11.420466   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0203 11:08:11.450675   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0203 11:08:11.479542   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0203 11:08:11.509765   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0203 11:08:11.549139   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:08:11.566518   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:08:11.592577   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.599741   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.608854   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:08:11.625840   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 11:08:11.654495   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:08:11.682524   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.689386   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.698542   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:08:11.715902   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:08:11.743475   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:08:11.772598   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.779579   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.787380   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:08:11.807036   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 11:08:11.843230   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:08:11.852951   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:08:11.853079   12544 kubeadm.go:934] updating node {m02 172.25.13.142 8443 v1.32.1 docker true true} ...
	I0203 11:08:11.853079   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.13.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:08:11.853079   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:08:11.860986   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:08:11.891732   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:08:11.891732   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0203 11:08:11.900561   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:08:11.919500   12544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0203 11:08:11.927755   12544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0203 11:08:11.948610   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
	I0203 11:08:11.948680   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0203 11:08:11.948680   12544 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
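The three download.go lines above fetch kubectl, kubelet and kubeadm for the new node from dl.k8s.io, pinning each download to its published .sha256 via the `?checksum=file:` query that minikube's downloader understands. A standalone Go sketch of the same verify-against-published-checksum idea using only the standard library; the URL mirrors the log, but the helper itself is an assumption, not minikube's download.go:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl"
	got, err := fetch(base, "kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The published .sha256 file contains just the hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch for kubectl")
		os.Exit(1)
	}
	fmt.Println("kubectl verified:", got)
}
```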
	I0203 11:08:13.017768   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:08:13.027840   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:08:13.033886   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 11:08:13.033886   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0203 11:08:13.091915   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:08:13.099901   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:08:13.171977   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 11:08:13.172155   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0203 11:08:13.636731   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:08:13.692751   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:08:13.700750   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:08:13.722865   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 11:08:13.723025   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0203 11:08:14.228469   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0203 11:08:14.247548   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0203 11:08:14.280195   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:08:14.310962   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0203 11:08:14.349516   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:08:14.356059   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:08:14.386740   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:08:14.580038   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:08:14.614328   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:08:14.615277   12544 start.go:317] joinCluster: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:08:14.615511   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 11:08:14.615626   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:08:16.573309   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:08:16.573390   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:16.573476   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:08:18.925157   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:08:18.925157   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:08:18.925855   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:08:19.317985   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7023148s)
	I0203 11:08:19.317985   12544 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:08:19.317985   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xi17re.n6bazw697qvc86yk --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m02 --control-plane --apiserver-advertise-address=172.25.13.142 --apiserver-bind-port=8443"
	I0203 11:08:59.562181   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xi17re.n6bazw697qvc86yk --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m02 --control-plane --apiserver-advertise-address=172.25.13.142 --apiserver-bind-port=8443": (40.2437373s)
	I0203 11:08:59.562181   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 11:09:00.303269   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000-m02 minikube.k8s.io/updated_at=2025_02_03T11_09_00_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=false
	I0203 11:09:00.463893   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-429000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0203 11:09:00.612956   12544 start.go:319] duration metric: took 45.9971639s to joinCluster
	I0203 11:09:00.613149   12544 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:09:00.613672   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:09:00.615311   12544 out.go:177] * Verifying Kubernetes components...
	I0203 11:09:00.626684   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:09:00.951656   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:09:00.976442   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:09:00.976759   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0203 11:09:00.976759   12544 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.15.254:8443 with https://172.25.12.47:8443
	I0203 11:09:00.977330   12544 node_ready.go:35] waiting up to 6m0s for node "ha-429000-m02" to be "Ready" ...
	I0203 11:09:00.977942   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:00.977942   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:00.978001   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:00.978001   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.012197   12544 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0203 11:09:01.477609   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:01.477609   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:01.477609   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.477609   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:01.484490   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:01.977451   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:01.977451   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:01.977451   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:01.977451   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:01.982874   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.478303   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:02.478303   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:02.478303   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:02.478303   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:02.483307   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.978527   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:02.978527   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:02.978527   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:02.978527   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:02.983875   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:02.985352   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
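node_ready.go above polls GET /api/v1/nodes/ha-429000-m02 roughly every half second (through the client config, with the stale VIP endpoint overridden to the primary at 172.25.12.47:8443) until the joined control plane reports Ready, with a 6-minute budget. An equivalent, hedged client-go sketch; the kubeconfig path, node name and intervals come from the log, but the polling helper itself is not minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node currently has the Ready condition set to True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-429000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```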
	I0203 11:09:03.477810   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:03.477810   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:03.477810   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:03.477810   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:03.482919   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:03.977582   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:03.977582   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:03.977582   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:03.977582   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:03.982585   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:04.479202   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:04.479202   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:04.479202   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:04.479202   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:04.585675   12544 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0203 11:09:04.978034   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:04.978034   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:04.978034   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:04.978034   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:04.982038   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:05.478067   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:05.478067   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:05.478067   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:05.478067   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:05.483520   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:05.484071   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:05.978365   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:05.978365   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:05.978365   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:05.978365   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:05.983356   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:06.478615   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:06.478615   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:06.478615   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:06.478615   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:06.777587   12544 round_trippers.go:574] Response Status: 200 OK in 298 milliseconds
	I0203 11:09:06.978982   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:06.978982   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:06.978982   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:06.978982   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:06.985357   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:07.477596   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:07.477596   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:07.477596   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:07.477596   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:07.500007   12544 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0203 11:09:07.501094   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:07.977556   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:07.977556   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:07.977556   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:07.977556   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:07.983217   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:08.477799   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:08.477799   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:08.477799   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:08.477799   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:08.484171   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:08.977721   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:08.978145   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:08.978145   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:08.978145   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:08.984619   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:09.477927   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:09.477927   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:09.477927   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:09.477927   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:09.484987   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:09.977537   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:09.977537   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:09.977537   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:09.977537   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:09.983291   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:09.984139   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:10.478519   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:10.478519   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:10.478519   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:10.478519   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:10.484455   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:10.978790   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:10.978790   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:10.978790   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:10.978790   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:10.984182   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:11.478120   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:11.478120   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:11.478120   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:11.478120   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:11.482246   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:11.978503   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:11.978571   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:11.978571   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:11.978571   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:11.983849   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:11.985708   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:12.478333   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:12.478333   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:12.478333   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:12.478333   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:12.483669   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:12.978840   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:12.978840   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:12.978840   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:12.978840   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:12.983431   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:13.478161   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:13.478280   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:13.478280   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:13.478280   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:13.481913   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:13.978178   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:13.978178   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:13.978178   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:13.978178   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:13.984477   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:14.477979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:14.477979   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:14.477979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:14.477979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:14.482714   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:14.483772   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:14.978423   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:14.978423   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:14.978423   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:14.978423   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:14.988814   12544 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 11:09:15.478074   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:15.478074   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:15.478074   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:15.478074   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:15.484085   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:15.978467   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:15.978467   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:15.978467   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:15.978467   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:15.984069   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:16.478028   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:16.478028   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:16.478028   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:16.478028   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:16.483811   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:16.484586   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:16.978002   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:16.978474   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:16.978545   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:16.978545   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:16.983820   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:17.478312   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:17.478312   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:17.478312   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:17.478312   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:17.483577   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:17.977996   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:17.977996   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:17.977996   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:17.977996   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:17.982906   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:18.478378   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:18.478378   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.478588   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.478588   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.484225   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:18.485182   12544 node_ready.go:53] node "ha-429000-m02" has status "Ready":"False"
	I0203 11:09:18.978535   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:18.978535   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.978535   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.978535   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.984135   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:18.984874   12544 node_ready.go:49] node "ha-429000-m02" has status "Ready":"True"
	I0203 11:09:18.984874   12544 node_ready.go:38] duration metric: took 18.0073382s for node "ha-429000-m02" to be "Ready" ...
	I0203 11:09:18.984951   12544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:09:18.985085   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:18.985085   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:18.985151   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:18.985151   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:18.994876   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:19.003718   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.003718   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5jzvf
	I0203 11:09:19.003718   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.003718   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.003718   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.008673   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.010275   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.010275   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.010275   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.010275   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.017296   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:19.017296   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.017296   12544 pod_ready.go:82] duration metric: took 13.5772ms for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.017296   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.018045   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-r5pf5
	I0203 11:09:19.018105   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.018105   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.018105   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.021834   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.022979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.023034   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.023034   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.023034   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.026175   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.027185   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.027185   12544 pod_ready.go:82] duration metric: took 9.8892ms for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.027185   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.027264   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000
	I0203 11:09:19.027264   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.027264   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.027264   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.031133   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.031736   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.031736   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.031736   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.031806   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.035368   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.036120   12544 pod_ready.go:93] pod "etcd-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.036305   12544 pod_ready.go:82] duration metric: took 9.1195ms for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.036344   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.036533   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m02
	I0203 11:09:19.036878   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.036878   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.036878   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.046022   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:19.046920   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.046950   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.046950   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.046990   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.050672   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:09:19.050672   12544 pod_ready.go:93] pod "etcd-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.050672   12544 pod_ready.go:82] duration metric: took 14.3279ms for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.050672   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.180221   12544 request.go:632] Waited for 129.5475ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:09:19.180448   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:09:19.180448   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.180448   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.180448   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.185731   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.379390   12544 request.go:632] Waited for 193.1917ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.379792   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:19.379792   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.379792   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.379792   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.385025   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:19.385627   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.385627   12544 pod_ready.go:82] duration metric: took 334.9511ms for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.385719   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.579161   12544 request.go:632] Waited for 193.4399ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:09:19.579161   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:09:19.579161   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.579161   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.579161   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.584594   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:19.779333   12544 request.go:632] Waited for 193.733ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.779333   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:19.779333   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.779333   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.779333   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.786277   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:19.787082   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:19.787156   12544 pod_ready.go:82] duration metric: took 401.4324ms for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.787156   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:19.979168   12544 request.go:632] Waited for 191.9305ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:09:19.979168   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:09:19.979168   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:19.979168   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:19.979696   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:19.990469   12544 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 11:09:20.179276   12544 request.go:632] Waited for 187.9792ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:20.179569   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:20.179569   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.179569   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.179569   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.183876   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:20.185014   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.185014   12544 pod_ready.go:82] duration metric: took 397.8537ms for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.185014   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.379406   12544 request.go:632] Waited for 194.2749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:09:20.379406   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:09:20.379815   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.379815   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.379815   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.384424   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:20.578586   12544 request.go:632] Waited for 192.8918ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.578892   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.578892   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.578892   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.578892   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.588053   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:09:20.589019   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.589086   12544 pod_ready.go:82] duration metric: took 404.0669ms for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.589086   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.779020   12544 request.go:632] Waited for 189.8645ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:09:20.779327   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:09:20.779376   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.779376   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.779376   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.785463   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:09:20.979641   12544 request.go:632] Waited for 192.9365ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.980068   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:20.980120   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:20.980157   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:20.980157   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:20.987428   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:20.987428   12544 pod_ready.go:93] pod "kube-proxy-2n5cz" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:20.987428   12544 pod_ready.go:82] duration metric: took 398.3373ms for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:20.987428   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.180890   12544 request.go:632] Waited for 193.4599ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:09:21.180890   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:09:21.180890   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.180890   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.180890   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.185018   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:09:21.378798   12544 request.go:632] Waited for 191.8209ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.378798   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.378798   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.378798   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.378798   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.383967   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.384593   12544 pod_ready.go:93] pod "kube-proxy-dhm6z" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:21.384658   12544 pod_ready.go:82] duration metric: took 397.2254ms for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.384658   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.578995   12544 request.go:632] Waited for 194.2189ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:09:21.578995   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:09:21.578995   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.578995   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.578995   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.584770   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.779527   12544 request.go:632] Waited for 194.0532ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.779527   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:09:21.779527   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.779527   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.779527   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.784744   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:21.785422   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:21.785422   12544 pod_ready.go:82] duration metric: took 400.7595ms for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.785486   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:21.979297   12544 request.go:632] Waited for 193.7445ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:09:21.979297   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:09:21.979632   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:21.979632   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:21.979632   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:21.988347   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:09:22.179047   12544 request.go:632] Waited for 189.9705ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:22.179047   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:09:22.179047   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.179047   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.179047   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.184463   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:22.185447   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:09:22.185447   12544 pod_ready.go:82] duration metric: took 399.9566ms for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:09:22.185447   12544 pod_ready.go:39] duration metric: took 3.2004594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:09:22.185585   12544 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:09:22.193496   12544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:09:22.220750   12544 api_server.go:72] duration metric: took 21.6072861s to wait for apiserver process to appear ...
	I0203 11:09:22.220905   12544 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:09:22.220905   12544 api_server.go:253] Checking apiserver healthz at https://172.25.12.47:8443/healthz ...
	I0203 11:09:22.234051   12544 api_server.go:279] https://172.25.12.47:8443/healthz returned 200:
	ok
	I0203 11:09:22.234146   12544 round_trippers.go:463] GET https://172.25.12.47:8443/version
	I0203 11:09:22.234146   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.234146   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.234146   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.235747   12544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 11:09:22.236106   12544 api_server.go:141] control plane version: v1.32.1
	I0203 11:09:22.236136   12544 api_server.go:131] duration metric: took 15.2313ms to wait for apiserver health ...
	I0203 11:09:22.236136   12544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:09:22.378779   12544 request.go:632] Waited for 142.587ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.379206   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.379296   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.379296   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.379296   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.386652   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:22.392471   12544 system_pods.go:59] 17 kube-system pods found
	I0203 11:09:22.393006   12544 system_pods.go:61] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:09:22.393006   12544 system_pods.go:61] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:09:22.393006   12544 system_pods.go:74] duration metric: took 156.814ms to wait for pod list to return data ...
	I0203 11:09:22.393006   12544 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:09:22.579267   12544 request.go:632] Waited for 186.1162ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:09:22.579267   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:09:22.579267   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.579267   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.579267   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.585682   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:09:22.585961   12544 default_sa.go:45] found service account: "default"
	I0203 11:09:22.585961   12544 default_sa.go:55] duration metric: took 192.9529ms for default service account to be created ...
	I0203 11:09:22.585961   12544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:09:22.779593   12544 request.go:632] Waited for 193.5286ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.779593   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:09:22.779593   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.779593   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.779593   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.787485   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:09:22.793721   12544 system_pods.go:86] 17 kube-system pods found
	I0203 11:09:22.793721   12544 system_pods.go:89] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:09:22.793721   12544 system_pods.go:89] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:09:22.793721   12544 system_pods.go:126] duration metric: took 207.7582ms to wait for k8s-apps to be running ...
	I0203 11:09:22.793721   12544 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:09:22.801544   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:09:22.829400   12544 system_svc.go:56] duration metric: took 35.6783ms WaitForService to wait for kubelet
	I0203 11:09:22.829400   12544 kubeadm.go:582] duration metric: took 22.2159289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:09:22.829400   12544 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:09:22.979912   12544 request.go:632] Waited for 150.51ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes
	I0203 11:09:22.980126   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes
	I0203 11:09:22.980126   12544 round_trippers.go:469] Request Headers:
	I0203 11:09:22.980126   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:09:22.980126   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:09:22.991325   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:09:22.992679   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:09:22.992795   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:09:22.992866   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:09:22.992866   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:09:22.992899   12544 node_conditions.go:105] duration metric: took 163.4965ms to run NodePressure ...
	I0203 11:09:22.992899   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:09:22.992957   12544 start.go:255] writing updated cluster config ...
	I0203 11:09:22.996848   12544 out.go:201] 
	I0203 11:09:23.016629   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:09:23.016864   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:09:23.023493   12544 out.go:177] * Starting "ha-429000-m03" control-plane node in "ha-429000" cluster
	I0203 11:09:23.025476   12544 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 11:09:23.025476   12544 cache.go:56] Caching tarball of preloaded images
	I0203 11:09:23.025476   12544 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 11:09:23.026470   12544 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 11:09:23.026470   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:09:23.036680   12544 start.go:360] acquireMachinesLock for ha-429000-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 11:09:23.036680   12544 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-429000-m03"
	I0203 11:09:23.037495   12544 start.go:93] Provisioning new machine with config: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:def
ault APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio
:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:09:23.037526   12544 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0203 11:09:23.041814   12544 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 11:09:23.042595   12544 start.go:159] libmachine.API.Create for "ha-429000" (driver="hyperv")
	I0203 11:09:23.042595   12544 client.go:168] LocalClient.Create starting
	I0203 11:09:23.042781   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 11:09:23.043221   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:09:23.043221   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:09:23.043423   12544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 11:09:23.043584   12544 main.go:141] libmachine: Decoding PEM data...
	I0203 11:09:23.043584   12544 main.go:141] libmachine: Parsing certificate...
	I0203 11:09:23.043584   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:24.825290   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 11:09:26.465219   12544 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 11:09:26.465478   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:26.465556   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:09:27.856261   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:09:27.856849   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:27.856947   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:09:31.267585   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:09:31.267585   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:31.269513   12544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 11:09:31.644314   12544 main.go:141] libmachine: Creating SSH key...
	I0203 11:09:31.905532   12544 main.go:141] libmachine: Creating VM...
	I0203 11:09:31.905532   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 11:09:34.614001   12544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 11:09:34.614083   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:34.614158   12544 main.go:141] libmachine: Using switch "Default Switch"
	I0203 11:09:34.614246   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 11:09:36.267265   12544 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 11:09:36.267265   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:36.267573   12544 main.go:141] libmachine: Creating VHD
	I0203 11:09:36.267573   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 11:09:39.930156   12544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CF8BF7F8-7682-4EB8-9A66-97DF1B7993F6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 11:09:39.931037   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:39.931037   12544 main.go:141] libmachine: Writing magic tar header
	I0203 11:09:39.931037   12544 main.go:141] libmachine: Writing SSH key tar header
	I0203 11:09:39.943513   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 11:09:43.006544   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:43.006803   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:43.006803   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd' -SizeBytes 20000MB
	I0203 11:09:45.406729   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:45.406729   12544 main.go:141] libmachine: [stderr =====>] : 
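The disk is prepared in three steps: a tiny fixed-size VHD is created, the driver writes its boot marker and SSH key into it as a tar stream (the "magic tar header" above), and the file is then converted to a dynamic VHD and grown to the requested 20000MB. A hand-runnable sketch of the PowerShell half of that sequence (paths shortened; the tar writing itself happens inside the Go driver, not in PowerShell):

    # Hedged sketch of the VHD preparation steps logged above.
    $dir = 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03'
    # 1. small fixed VHD that the driver can write raw tar data into
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # 2. (driver writes the boot2docker "magic" tar header and the SSH key here)
    # 3. convert to a sparse/dynamic disk and delete the fixed source
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    # 4. grow the dynamic disk to the size requested for the node
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB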
	I0203 11:09:45.406820   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 11:09:48.801462   12544 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-429000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 11:09:48.801699   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:48.801803   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-429000-m03 -DynamicMemoryEnabled $false
	I0203 11:09:50.858092   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:50.858092   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:50.858899   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-429000-m03 -Count 2
	I0203 11:09:52.891961   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:52.892957   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:52.892957   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\boot2docker.iso'
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:55.284880   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-429000-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\disk.vhd'
	I0203 11:09:57.683436   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:09:57.683513   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:09:57.683513   12544 main.go:141] libmachine: Starting VM...
	I0203 11:09:57.683513   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-429000-m03
	I0203 11:10:00.518485   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:00.518485   12544 main.go:141] libmachine: [stderr =====>] : 
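VM creation then proceeds as a fixed sequence of Hyper-V cmdlets: create the VM on the chosen switch, pin static memory and the CPU count, attach the boot2docker ISO and the data disk, and start it. Condensed from the commands logged above (same parameters, shortened paths):

    # Hedged sketch of the VM creation/start sequence logged above.
    $name = 'ha-429000-m03'
    $dir  = "C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\$name"
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # static memory, no ballooning
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"  # boot ISO
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"    # data disk prepared earlier
    Hyper-V\Start-VM $name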
	I0203 11:10:00.518485   12544 main.go:141] libmachine: Waiting for host to start...
	I0203 11:10:00.518854   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:02.612782   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:04.936907   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:04.936907   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:05.937751   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:07.926762   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:07.927453   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:07.927453   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:10.215940   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:10.216737   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:11.217481   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:13.208664   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:13.209681   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:13.209788   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:15.571509   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:15.572234   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:16.573134   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:18.622142   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:18.622142   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:18.622498   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:20.956476   12544 main.go:141] libmachine: [stdout =====>] : 
	I0203 11:10:20.957345   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:21.958260   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:24.029618   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:24.029618   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:24.030475   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:26.460332   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:26.460839   12544 main.go:141] libmachine: [stderr =====>] : 
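After Start-VM the driver polls two things: the VM state and the first IPv4 address reported by the first network adapter. The address stays empty until the guest obtains its DHCP lease (about 26 seconds in this run, ending at 172.25.0.10 above). An equivalent manual wait loop, for illustration only (the real loop lives in the Go driver):

    # Hedged sketch of the "Waiting for host to start..." polling seen above.
    $name = 'ha-429000-m03'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $name).State
        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "VM $name is $state with IP $ip"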
	I0203 11:10:26.460839   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:28.448820   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:28.449039   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:28.449039   12544 machine.go:93] provisionDockerMachine start ...
	I0203 11:10:28.449039   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:30.509179   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:30.509179   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:30.509628   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:32.922209   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:32.922209   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:32.926084   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:32.942328   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:32.942485   12544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 11:10:33.080805   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 11:10:33.080805   12544 buildroot.go:166] provisioning hostname "ha-429000-m03"
	I0203 11:10:33.080805   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:35.060475   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:35.060475   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:35.060846   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:37.496864   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:37.496943   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:37.501226   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:37.501851   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:37.501851   12544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-429000-m03 && echo "ha-429000-m03" | sudo tee /etc/hostname
	I0203 11:10:37.663829   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-429000-m03
	
	I0203 11:10:37.663829   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:39.663803   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:39.663893   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:39.663965   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:42.059871   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:42.060507   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:42.066805   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:10:42.066805   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:10:42.066805   12544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-429000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-429000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-429000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 11:10:42.208469   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 11:10:42.208469   12544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 11:10:42.208469   12544 buildroot.go:174] setting up certificates
	I0203 11:10:42.208469   12544 provision.go:84] configureAuth start
	I0203 11:10:42.208469   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:44.162647   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:44.163302   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:44.163392   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:46.544354   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:48.548493   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:48.548493   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:48.548567   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:50.937084   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:50.937084   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:50.937392   12544 provision.go:143] copyHostCerts
	I0203 11:10:50.937392   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 11:10:50.937392   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 11:10:50.937392   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 11:10:50.938082   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 11:10:50.938799   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 11:10:50.938799   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 11:10:50.938799   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 11:10:50.939474   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 11:10:50.940079   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 11:10:50.940079   12544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 11:10:50.940079   12544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 11:10:50.940786   12544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 11:10:50.941382   12544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-429000-m03 san=[127.0.0.1 172.25.0.10 ha-429000-m03 localhost minikube]
	I0203 11:10:51.165975   12544 provision.go:177] copyRemoteCerts
	I0203 11:10:51.175173   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 11:10:51.175173   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:53.173291   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:53.173447   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:53.173501   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:10:55.570280   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:10:55.570719   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:55.571021   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:10:55.671119   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4958955s)
	I0203 11:10:55.671119   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 11:10:55.671504   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 11:10:55.718915   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 11:10:55.719276   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0203 11:10:55.764976   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 11:10:55.764976   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 11:10:55.811912   12544 provision.go:87] duration metric: took 13.6032877s to configureAuth
	I0203 11:10:55.811975   12544 buildroot.go:189] setting minikube options for container-runtime
	I0203 11:10:55.812552   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:10:55.812629   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:10:57.745829   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:00.151002   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:00.151053   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:00.154873   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:00.155135   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:00.155135   12544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 11:11:00.292396   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 11:11:00.292496   12544 buildroot.go:70] root file system type: tmpfs
	I0203 11:11:00.292648   12544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 11:11:00.292730   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:02.280441   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:02.280566   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:02.280655   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:04.628377   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:04.628377   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:04.632783   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:04.633307   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:04.633396   12544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.47"
	Environment="NO_PROXY=172.25.12.47,172.25.13.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 11:11:04.800327   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.47
	Environment=NO_PROXY=172.25.12.47,172.25.13.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 11:11:04.800440   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:06.825890   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:06.825890   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:06.826760   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:09.224618   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:09.224618   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:09.228837   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:09.228837   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:09.228837   12544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 11:11:11.430762   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 11:11:11.430762   12544 machine.go:96] duration metric: took 42.9812331s to provisionDockerMachine
	I0203 11:11:11.430762   12544 client.go:171] duration metric: took 1m48.3869323s to LocalClient.Create
	I0203 11:11:11.430762   12544 start.go:167] duration metric: took 1m48.3869323s to libmachine.API.Create "ha-429000"
	I0203 11:11:11.430762   12544 start.go:293] postStartSetup for "ha-429000-m03" (driver="hyperv")
	I0203 11:11:11.431296   12544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 11:11:11.439364   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 11:11:11.439364   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:13.411191   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:13.411290   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:13.411290   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:15.793517   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:15.793517   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:15.793517   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:15.896868   12544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4573592s)
	I0203 11:11:15.904801   12544 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 11:11:15.912528   12544 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 11:11:15.912617   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 11:11:15.912645   12544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 11:11:15.913720   12544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 11:11:15.913720   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 11:11:15.922180   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 11:11:15.940792   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 11:11:15.993040   12544 start.go:296] duration metric: took 4.5615949s for postStartSetup
	I0203 11:11:15.995230   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:17.980452   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:17.980452   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:17.980631   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:20.378636   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:20.378790   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:20.379023   12544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\config.json ...
	I0203 11:11:20.380993   12544 start.go:128] duration metric: took 1m57.342129s to createHost
	I0203 11:11:20.381126   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:22.332478   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:24.725597   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:24.725597   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:24.733207   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:24.733728   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:24.733728   12544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 11:11:24.864680   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738581084.879791398
	
	I0203 11:11:24.864680   12544 fix.go:216] guest clock: 1738581084.879791398
	I0203 11:11:24.864680   12544 fix.go:229] Guest: 2025-02-03 11:11:24.879791398 +0000 UTC Remote: 2025-02-03 11:11:20.3810596 +0000 UTC m=+523.839750701 (delta=4.498731798s)
	I0203 11:11:24.865278   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:26.903708   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:26.904409   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:26.904462   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:29.265049   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:29.265049   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:29.269719   12544 main.go:141] libmachine: Using SSH client type: native
	I0203 11:11:29.270272   12544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.0.10 22 <nil> <nil>}
	I0203 11:11:29.270349   12544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738581084
	I0203 11:11:29.408982   12544 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 11:11:24 UTC 2025
	
	I0203 11:11:29.408982   12544 fix.go:236] clock set: Mon Feb  3 11:11:24 UTC 2025
	 (err=<nil>)
	I0203 11:11:29.409047   12544 start.go:83] releasing machines lock for "ha-429000-m03", held for 2m6.3709259s
	I0203 11:11:29.409047   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:31.393155   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:33.771458   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:33.771553   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:33.774457   12544 out.go:177] * Found network options:
	I0203 11:11:33.776912   12544 out.go:177]   - NO_PROXY=172.25.12.47,172.25.13.142
	W0203 11:11:33.778688   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.778688   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:11:33.781199   12544 out.go:177]   - NO_PROXY=172.25.12.47,172.25.13.142
	W0203 11:11:33.784975   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.784975   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.785960   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 11:11:33.785960   12544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 11:11:33.788320   12544 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 11:11:33.788320   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:33.795079   12544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 11:11:33.795079   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:11:35.847884   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:35.848155   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:35.848318   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:35.848801   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:35.848801   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:35.849051   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:38.333262   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:38.333262   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:38.333262   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:38.357258   12544 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:11:38.357258   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:38.357258   12544 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:11:38.427442   12544 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6381021s)
	W0203 11:11:38.427442   12544 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 11:11:38.446040   12544 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6508082s)
	W0203 11:11:38.446129   12544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 11:11:38.456426   12544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 11:11:38.485488   12544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 11:11:38.485488   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:11:38.485488   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:11:38.528687   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0203 11:11:38.542781   12544 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 11:11:38.542878   12544 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 11:11:38.558633   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 11:11:38.582061   12544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 11:11:38.591932   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 11:11:38.620258   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:11:38.647297   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 11:11:38.675785   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 11:11:38.702790   12544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 11:11:38.731172   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 11:11:38.759644   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 11:11:38.788222   12544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 11:11:38.814231   12544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 11:11:38.832509   12544 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 11:11:38.840496   12544 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 11:11:38.868270   12544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 11:11:38.892429   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:39.085127   12544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 11:11:39.118803   12544 start.go:495] detecting cgroup driver to use...
	I0203 11:11:39.126327   12544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 11:11:39.158845   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:11:39.187199   12544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 11:11:39.218421   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 11:11:39.250023   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:11:39.281732   12544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 11:11:39.342180   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 11:11:39.366267   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 11:11:39.406566   12544 ssh_runner.go:195] Run: which cri-dockerd
	I0203 11:11:39.420442   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 11:11:39.436907   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 11:11:39.476269   12544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 11:11:39.668765   12544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 11:11:39.848458   12544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 11:11:39.849451   12544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 11:11:39.888969   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:40.082254   12544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 11:11:42.668594   12544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5863098s)
	I0203 11:11:42.678853   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 11:11:42.710528   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:11:42.741505   12544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 11:11:42.931042   12544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 11:11:43.121490   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:43.301125   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 11:11:43.341175   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 11:11:43.373242   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:43.566660   12544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 11:11:43.671244   12544 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 11:11:43.680540   12544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 11:11:43.692621   12544 start.go:563] Will wait 60s for crictl version
	I0203 11:11:43.700499   12544 ssh_runner.go:195] Run: which crictl
	I0203 11:11:43.715098   12544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 11:11:43.767299   12544 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 11:11:43.774365   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:11:43.814332   12544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 11:11:43.851343   12544 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 11:11:43.853834   12544 out.go:177]   - env NO_PROXY=172.25.12.47
	I0203 11:11:43.857383   12544 out.go:177]   - env NO_PROXY=172.25.12.47,172.25.13.142
	I0203 11:11:43.859514   12544 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 11:11:43.863617   12544 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 11:11:43.866837   12544 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 11:11:43.866837   12544 ip.go:214] interface addr: 172.25.0.1/20
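To resolve host.minikube.internal, the driver looks for the host-side interface whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.25.0.1/20 above). The same lookup can be reproduced by hand; this is an equivalent check for illustration, not the code path minikube uses (which walks the interfaces in Go):

    # Hedged sketch: find the host IPv4 address on the Hyper-V "Default Switch" vEthernet adapter.
    Get-NetIPAddress -AddressFamily IPv4 |
        Where-Object { $_.InterfaceAlias -like 'vEthernet (Default Switch)*' } |
        Select-Object InterfaceAlias, IPAddress, PrefixLength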
	I0203 11:11:43.873400   12544 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 11:11:43.880316   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:11:43.905418   12544 mustload.go:65] Loading cluster: ha-429000
	I0203 11:11:43.905925   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:11:43.906482   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:45.875968   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:45.876041   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:45.876041   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:11:45.876532   12544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000 for IP: 172.25.0.10
	I0203 11:11:45.876532   12544 certs.go:194] generating shared ca certs ...
	I0203 11:11:45.876532   12544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.877281   12544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 11:11:45.877440   12544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 11:11:45.877440   12544 certs.go:256] generating profile certs ...
	I0203 11:11:45.878213   12544 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\client.key
	I0203 11:11:45.878213   12544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a
	I0203 11:11:45.878213   12544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.47 172.25.13.142 172.25.0.10 172.25.15.254]
	I0203 11:11:45.988705   12544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a ...
	I0203 11:11:45.988705   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a: {Name:mk1be027ea55560d27ff8cb8e301fd81e5b5b837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.989687   12544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a ...
	I0203 11:11:45.989687   12544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a: {Name:mk37b37b896fde1ac629a06ce6b4f6563adaa9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 11:11:45.990196   12544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt.a3f7526a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt
	I0203 11:11:46.007610   12544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key.a3f7526a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key
	I0203 11:11:46.008617   12544 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 11:11:46.008617   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 11:11:46.009623   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 11:11:46.009623   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 11:11:46.009623   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 11:11:46.010610   12544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 11:11:46.010610   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 11:11:46.011639   12544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 11:11:46.011836   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 11:11:46.011989   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 11:11:46.012066   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:46.012231   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:48.004707   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:48.004707   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:48.004783   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:50.356353   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:11:50.356353   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:50.357512   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:11:50.454373   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0203 11:11:50.462142   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0203 11:11:50.493099   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0203 11:11:50.499380   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0203 11:11:50.528039   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0203 11:11:50.535100   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0203 11:11:50.563056   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0203 11:11:50.569484   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0203 11:11:50.595729   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0203 11:11:50.603095   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0203 11:11:50.629965   12544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0203 11:11:50.637077   12544 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0203 11:11:50.657885   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 11:11:50.706110   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 11:11:50.752374   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 11:11:50.798131   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 11:11:50.850022   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0203 11:11:50.894878   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 11:11:50.943862   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 11:11:50.990387   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-429000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 11:11:51.034907   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 11:11:51.079174   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 11:11:51.123129   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 11:11:51.170977   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0203 11:11:51.202841   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0203 11:11:51.235211   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0203 11:11:51.265247   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0203 11:11:51.295918   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0203 11:11:51.326702   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0203 11:11:51.357616   12544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0203 11:11:51.397694   12544 ssh_runner.go:195] Run: openssl version
	I0203 11:11:51.414533   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 11:11:51.441640   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.449984   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.457631   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 11:11:51.473803   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 11:11:51.502188   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 11:11:51.528913   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.535388   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.542698   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 11:11:51.560653   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 11:11:51.587691   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 11:11:51.615327   12544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.621985   12544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.630578   12544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 11:11:51.647306   12544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
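	The three Run lines above show how extra CA certificates are made visible inside the guest: each PEM is placed under /usr/share/ca-certificates, hashed with openssl x509 -hash, and symlinked into /etc/ssl/certs under its subject hash (e.g. b5213941.0 for minikubeCA.pem). Below is a minimal Go sketch of that hash-and-symlink step; it assumes a Linux host with openssl on PATH and root privileges, and installCACert is a hypothetical helper for illustration, not minikube's own code.

	// installCACert is an illustrative sketch (not minikube's helper) of the
	// hash-and-symlink step visible in the log above: compute the OpenSSL
	// subject hash of a PEM certificate and link it into /etc/ssl/certs so
	// tools that scan that directory can find it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath string) error {
		// "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// /etc/ssl/certs/<hash>.0 is the conventional lookup name; ".0" assumes a
		// single certificate with this hash, mirroring the "ln -fs" in the log.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any existing link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}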
	I0203 11:11:51.676810   12544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 11:11:51.686256   12544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 11:11:51.686256   12544 kubeadm.go:934] updating node {m03 172.25.0.10 8443 v1.32.1 docker true true} ...
	I0203 11:11:51.686256   12544 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-429000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.0.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 11:11:51.686256   12544 kube-vip.go:115] generating kube-vip config ...
	I0203 11:11:51.695053   12544 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0203 11:11:51.723318   12544 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0203 11:11:51.723318   12544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.15.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0203 11:11:51.731211   12544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 11:11:51.754103   12544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0203 11:11:51.761708   12544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0203 11:11:51.780644   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0203 11:11:51.780708   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0203 11:11:51.780644   12544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0203 11:11:51.780790   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:11:51.780790   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:11:51.791364   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:11:51.791364   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 11:11:51.793347   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 11:11:51.810553   12544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:11:51.810553   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 11:11:51.810553   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 11:11:51.811551   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0203 11:11:51.811551   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0203 11:11:51.821243   12544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 11:11:51.888584   12544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 11:11:51.888777   12544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
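	The download URLs logged at binary.go:74 carry a checksum=file:...sha256 parameter: each kubectl/kubeadm/kubelet binary is fetched together with its published SHA-256 digest and verified before being copied into /var/lib/minikube/binaries. A rough, self-contained sketch of that download-and-verify pattern follows; it is not minikube's downloader, and the output filename and error handling are simplified for illustration.

	// Fetch a release binary and its published .sha256 digest, compare them,
	// and only then write the binary to disk. Sketch only; paths simplified.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("verified and wrote kubectl")
	}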
	I0203 11:11:52.959826   12544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0203 11:11:52.978273   12544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0203 11:11:53.010417   12544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 11:11:53.043140   12544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0203 11:11:53.084942   12544 ssh_runner.go:195] Run: grep 172.25.15.254	control-plane.minikube.internal$ /etc/hosts
	I0203 11:11:53.091003   12544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.15.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 11:11:53.121543   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:11:53.305437   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:11:53.336247   12544 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:11:53.336850   12544 start.go:317] joinCluster: &{Name:ha-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-429000 Namespace:default APIServerHAVIP:172.25.15.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.47 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.13.142 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 11:11:53.336850   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 11:11:53.336850   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:55.329735   12544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:11:57.692163   12544 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:11:57.692163   12544 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:11:57.693333   12544 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:11:57.886903   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.550001s)
	I0203 11:11:57.886989   12544 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:11:57.887141   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g2pn61.gbq976xywc4o46as --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m03 --control-plane --apiserver-advertise-address=172.25.0.10 --apiserver-bind-port=8443"
	I0203 11:12:39.040256   12544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g2pn61.gbq976xywc4o46as --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-429000-m03 --control-plane --apiserver-advertise-address=172.25.0.10 --apiserver-bind-port=8443": (41.1526455s)
	I0203 11:12:39.040256   12544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 11:12:39.875986   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-429000-m03 minikube.k8s.io/updated_at=2025_02_03T11_12_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=ha-429000 minikube.k8s.io/primary=false
	I0203 11:12:40.066803   12544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-429000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0203 11:12:40.311564   12544 start.go:319] duration metric: took 46.9741786s to joinCluster
	I0203 11:12:40.311658   12544 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.0.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 11:12:40.312405   12544 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:12:40.314860   12544 out.go:177] * Verifying Kubernetes components...
	I0203 11:12:40.326973   12544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 11:12:40.741212   12544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 11:12:40.801220   12544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 11:12:40.802217   12544 kapi.go:59] client config for ha-429000: &rest.Config{Host:"https://172.25.15.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-429000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0203 11:12:40.802217   12544 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.15.254:8443 with https://172.25.12.47:8443
	I0203 11:12:40.802217   12544 node_ready.go:35] waiting up to 6m0s for node "ha-429000-m03" to be "Ready" ...
	I0203 11:12:40.803214   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:40.803214   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:40.803214   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:40.803214   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:40.818551   12544 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0203 11:12:41.304383   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:41.304383   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:41.304383   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:41.304383   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:41.309699   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:41.803415   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:41.803415   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:41.803415   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:41.803415   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:41.809123   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:42.303355   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:42.303355   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:42.303355   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:42.303355   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:42.308568   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:42.803569   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:42.803569   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:42.803569   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:42.803569   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:42.817614   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:12:42.818557   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:43.303540   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:43.303540   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:43.303540   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:43.303540   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:43.326433   12544 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0203 11:12:43.803930   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:43.803930   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:43.803930   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:43.803930   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:43.808716   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:44.303778   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:44.303778   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:44.303778   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:44.303778   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:44.309928   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:12:44.803763   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:44.803763   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:44.803763   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:44.803763   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:44.817986   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:12:44.818768   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:45.303563   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:45.304040   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:45.304090   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:45.304090   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:45.312045   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:45.804070   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:45.804070   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:45.804070   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:45.804070   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:45.809608   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:46.303794   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:46.303972   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:46.303972   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:46.303972   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:46.312372   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:12:46.803905   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:46.803905   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:46.803905   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:46.803905   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:46.809375   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:47.304447   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:47.304447   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:47.304447   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:47.304447   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:47.309172   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:47.309926   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:47.803564   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:47.803564   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:47.803564   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:47.803564   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:47.808932   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:48.304232   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:48.304232   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:48.304232   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:48.304232   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:48.309626   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:48.803998   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:48.803998   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:48.803998   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:48.803998   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:48.809236   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:49.303933   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:49.303933   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:49.303933   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:49.303933   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:49.309008   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:49.804097   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:49.804190   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:49.804190   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:49.804190   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:49.814160   12544 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 11:12:49.814704   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:50.304472   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:50.304540   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:50.304540   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:50.304540   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:50.309793   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:50.803704   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:50.803704   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:50.803704   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:50.803704   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:50.808899   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:51.303673   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:51.303673   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:51.303673   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:51.303673   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:51.309059   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:51.803824   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:51.803824   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:51.803824   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:51.803824   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:51.808690   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:52.303348   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:52.303348   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:52.303348   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:52.303348   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:52.308777   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:52.309411   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:52.804217   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:52.804217   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:52.804217   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:52.804217   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:52.809727   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:53.304122   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:53.304122   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:53.304122   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:53.304122   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:53.307906   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:12:53.803839   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:53.803839   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:53.803839   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:53.803839   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:53.808607   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:54.304264   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:54.304264   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:54.304264   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:54.304264   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:54.310236   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:54.311000   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:54.803813   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:54.803813   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:54.803813   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:54.803813   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:54.815141   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:12:55.304120   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:55.304120   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:55.304120   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:55.304120   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:55.309119   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:55.803844   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:55.803844   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:55.803844   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:55.803844   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:55.809119   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:56.303992   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:56.303992   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:56.303992   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:56.303992   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:56.311805   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:56.312235   12544 node_ready.go:53] node "ha-429000-m03" has status "Ready":"False"
	I0203 11:12:56.804145   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:56.804145   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:56.804145   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:56.804145   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:56.809681   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:57.303756   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:57.303756   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:57.303756   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:57.303756   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:57.312008   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:12:57.803822   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:57.803822   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:57.803822   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:57.803822   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:57.809325   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.304400   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:58.304400   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.304400   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.304400   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.310029   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.803498   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:58.803498   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.803498   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.803498   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.808587   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:58.809347   12544 node_ready.go:49] node "ha-429000-m03" has status "Ready":"True"
	I0203 11:12:58.809410   12544 node_ready.go:38] duration metric: took 18.0059907s for node "ha-429000-m03" to be "Ready" ...
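	The repeated GET /api/v1/nodes/ha-429000-m03 requests above are a simple poll: roughly every 500ms the node object is fetched and its Ready condition inspected until it reports True (here after about 18s). A compact client-go sketch of the same idea follows, assuming a reachable kubeconfig; waitNodeReady is illustrative and not minikube's node_ready implementation, and the kubeconfig path is a placeholder.

	// Poll a node's Ready condition until it is True or a timeout elapses.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("node %q not Ready after %s", name, timeout)
	}

	func main() {
		// Placeholder kubeconfig path; substitute a real one when running this.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-429000-m03", 6*time.Minute); err != nil {
			panic(err)
		}
	}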
	I0203 11:12:58.809410   12544 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:12:58.809540   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:12:58.809685   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.809685   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.809685   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.841693   12544 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0203 11:12:58.850467   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.851042   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5jzvf
	I0203 11:12:58.851124   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.851124   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.851124   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.854120   12544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 11:12:58.855116   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.855116   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.855734   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.855734   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.859866   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.860457   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.860542   12544 pod_ready.go:82] duration metric: took 10.075ms for pod "coredns-668d6bf9bc-5jzvf" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.860542   12544 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.860674   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-r5pf5
	I0203 11:12:58.860674   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.860674   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.860674   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.864603   12544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 11:12:58.865373   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.865373   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.865373   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.865373   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.869479   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.870270   12544 pod_ready.go:93] pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.870270   12544 pod_ready.go:82] duration metric: took 9.7283ms for pod "coredns-668d6bf9bc-r5pf5" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.870329   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.870417   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000
	I0203 11:12:58.870417   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.870417   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.870417   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.874876   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.875963   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:58.875963   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.875963   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.876021   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.882228   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:12:58.882791   12544 pod_ready.go:93] pod "etcd-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.882791   12544 pod_ready.go:82] duration metric: took 12.4615ms for pod "etcd-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.882852   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.882920   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m02
	I0203 11:12:58.882920   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.882920   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.882920   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.894495   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:12:58.895072   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:12:58.895072   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:58.895072   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:58.895072   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:58.899816   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:58.900654   12544 pod_ready.go:93] pod "etcd-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:58.900654   12544 pod_ready.go:82] duration metric: took 17.8017ms for pod "etcd-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:58.900654   12544 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.004530   12544 request.go:632] Waited for 103.8753ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m03
	I0203 11:12:59.004530   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-429000-m03
	I0203 11:12:59.004530   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.004530   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.004530   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.008604   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:12:59.203869   12544 request.go:632] Waited for 194.6172ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:59.203869   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:12:59.203869   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.203869   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.203869   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.209854   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:59.210482   12544 pod_ready.go:93] pod "etcd-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:59.210482   12544 pod_ready.go:82] duration metric: took 309.8251ms for pod "etcd-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
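	The request.go:632 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. If a burst of node/pod GETs like the ones above needed to go faster, the limits can be raised when the client is built; the sketch below uses illustrative values and a placeholder kubeconfig path, and is not minikube's configuration.

	// Raise client-go's client-side rate limits when constructing a clientset.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default is 10
		_ = kubernetes.NewForConfigOrDie(cfg)
	}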
	I0203 11:12:59.210482   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.404119   12544 request.go:632] Waited for 193.6342ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:12:59.404119   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000
	I0203 11:12:59.404119   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.404119   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.404119   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.409342   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:12:59.604572   12544 request.go:632] Waited for 194.6357ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:59.604897   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:12:59.604968   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.605035   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.605053   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.612484   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:12:59.613084   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:12:59.613191   12544 pod_ready.go:82] duration metric: took 402.7035ms for pod "kube-apiserver-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.613191   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:12:59.803932   12544 request.go:632] Waited for 190.7398ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:12:59.803932   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m02
	I0203 11:12:59.803932   12544 round_trippers.go:469] Request Headers:
	I0203 11:12:59.803932   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:12:59.803932   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:12:59.808812   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:00.004001   12544 request.go:632] Waited for 193.9864ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:00.004001   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:00.004001   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.004001   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.004001   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.009450   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.009618   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.010144   12544 pod_ready.go:82] duration metric: took 396.9493ms for pod "kube-apiserver-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.010144   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.203523   12544 request.go:632] Waited for 193.2804ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m03
	I0203 11:13:00.203523   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-429000-m03
	I0203 11:13:00.203523   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.203523   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.203523   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.208875   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.403799   12544 request.go:632] Waited for 193.0197ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:00.403799   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:00.403799   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.404287   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.404287   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.409493   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:00.410154   12544 pod_ready.go:93] pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.410218   12544 pod_ready.go:82] duration metric: took 400.0688ms for pod "kube-apiserver-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.410218   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.604493   12544 request.go:632] Waited for 194.2103ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:13:00.604753   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000
	I0203 11:13:00.604753   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.604753   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.604753   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.612186   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:00.804326   12544 request.go:632] Waited for 191.1461ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:00.804326   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:00.804326   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:00.804326   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:00.804326   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:00.809223   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:00.810204   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:00.810262   12544 pod_ready.go:82] duration metric: took 400.0394ms for pod "kube-controller-manager-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:00.810262   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.003830   12544 request.go:632] Waited for 193.4936ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:13:01.003830   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m02
	I0203 11:13:01.003830   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.003830   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.003830   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.008814   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.203959   12544 request.go:632] Waited for 193.6638ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:01.204178   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:01.204178   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.204178   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.204178   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.208879   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.209614   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:01.209680   12544 pod_ready.go:82] duration metric: took 399.414ms for pod "kube-controller-manager-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.209680   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.404501   12544 request.go:632] Waited for 194.7278ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m03
	I0203 11:13:01.404758   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-429000-m03
	I0203 11:13:01.404758   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.404758   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.404758   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.412103   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:01.604712   12544 request.go:632] Waited for 191.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:01.604712   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:01.604712   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.604712   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.604712   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.608911   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:01.610302   12544 pod_ready.go:93] pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:01.610404   12544 pod_ready.go:82] duration metric: took 400.7189ms for pod "kube-controller-manager-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.610404   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:01.804232   12544 request.go:632] Waited for 193.7228ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:13:01.804232   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2n5cz
	I0203 11:13:01.804232   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:01.804232   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:01.804232   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:01.809242   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.004365   12544 request.go:632] Waited for 194.1121ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:02.004365   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:02.004365   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.004365   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.004365   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.009884   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.010185   12544 pod_ready.go:93] pod "kube-proxy-2n5cz" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.010185   12544 pod_ready.go:82] duration metric: took 399.7771ms for pod "kube-proxy-2n5cz" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.010185   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.204520   12544 request.go:632] Waited for 194.3326ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:13:02.204520   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhm6z
	I0203 11:13:02.204520   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.204520   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.204520   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.209337   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:02.404112   12544 request.go:632] Waited for 193.4593ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:02.404112   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:02.404112   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.404112   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.404112   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.408239   12544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 11:13:02.409842   12544 pod_ready.go:93] pod "kube-proxy-dhm6z" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.409842   12544 pod_ready.go:82] duration metric: took 399.6523ms for pod "kube-proxy-dhm6z" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.409842   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m9nhx" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.604328   12544 request.go:632] Waited for 194.3267ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m9nhx
	I0203 11:13:02.604635   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m9nhx
	I0203 11:13:02.604635   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.604635   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.604635   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.610172   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:02.803866   12544 request.go:632] Waited for 192.7497ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:02.803866   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:02.803866   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:02.803866   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:02.803866   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:02.812788   12544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 11:13:02.812788   12544 pod_ready.go:93] pod "kube-proxy-m9nhx" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:02.812788   12544 pod_ready.go:82] duration metric: took 402.9409ms for pod "kube-proxy-m9nhx" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:02.812788   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.004672   12544 request.go:632] Waited for 191.8826ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:13:03.004672   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000
	I0203 11:13:03.004672   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.005042   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.005042   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.010888   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.204523   12544 request.go:632] Waited for 192.7176ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:03.204523   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000
	I0203 11:13:03.204523   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.204523   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.204523   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.209585   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.210850   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:03.210959   12544 pod_ready.go:82] duration metric: took 398.1665ms for pod "kube-scheduler-ha-429000" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.210959   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.404602   12544 request.go:632] Waited for 193.5661ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:13:03.404602   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m02
	I0203 11:13:03.404602   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.404602   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.404602   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.410175   12544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 11:13:03.604330   12544 request.go:632] Waited for 193.0786ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:03.604330   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m02
	I0203 11:13:03.604330   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.604330   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.604330   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.618515   12544 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 11:13:03.619010   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:03.619010   12544 pod_ready.go:82] duration metric: took 408.0464ms for pod "kube-scheduler-ha-429000-m02" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.619010   12544 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:03.804767   12544 request.go:632] Waited for 185.7548ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m03
	I0203 11:13:03.805089   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-429000-m03
	I0203 11:13:03.805175   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:03.805175   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:03.805203   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:03.811686   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.003800   12544 request.go:632] Waited for 191.1085ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:04.003800   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes/ha-429000-m03
	I0203 11:13:04.003800   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.003800   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.003800   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.016471   12544 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 11:13:04.017503   12544 pod_ready.go:93] pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace has status "Ready":"True"
	I0203 11:13:04.017577   12544 pod_ready.go:82] duration metric: took 398.5629ms for pod "kube-scheduler-ha-429000-m03" in "kube-system" namespace to be "Ready" ...
	I0203 11:13:04.017639   12544 pod_ready.go:39] duration metric: took 5.2081696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 11:13:04.017639   12544 api_server.go:52] waiting for apiserver process to appear ...
	I0203 11:13:04.025943   12544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:13:04.051863   12544 api_server.go:72] duration metric: took 23.7399334s to wait for apiserver process to appear ...
	I0203 11:13:04.051863   12544 api_server.go:88] waiting for apiserver healthz status ...
	I0203 11:13:04.051863   12544 api_server.go:253] Checking apiserver healthz at https://172.25.12.47:8443/healthz ...
	I0203 11:13:04.059728   12544 api_server.go:279] https://172.25.12.47:8443/healthz returned 200:
	ok
	I0203 11:13:04.059843   12544 round_trippers.go:463] GET https://172.25.12.47:8443/version
	I0203 11:13:04.059900   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.059900   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.059900   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.061940   12544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 11:13:04.062014   12544 api_server.go:141] control plane version: v1.32.1
	I0203 11:13:04.062085   12544 api_server.go:131] duration metric: took 10.2228ms to wait for apiserver health ...
	I0203 11:13:04.062120   12544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 11:13:04.204325   12544 request.go:632] Waited for 142.1338ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.204325   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.204325   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.204325   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.204325   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.215758   12544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 11:13:04.225100   12544 system_pods.go:59] 24 kube-system pods found
	I0203 11:13:04.225180   12544 system_pods.go:61] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "etcd-ha-429000-m03" [ebe571cc-0005-4236-aa38-df20b82601d8] Running
	I0203 11:13:04.225180   12544 system_pods.go:61] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kindnet-ss84t" [b831ad88-827e-45b8-a208-78e6bceb72e3] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-apiserver-ha-429000-m03" [bc8b6aae-0084-4361-8e17-479a8e9b4d60] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-controller-manager-ha-429000-m03" [68b530c4-6823-46b9-a1c6-918cf1443e4a] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-proxy-m9nhx" [b12c48d5-de9f-4e4e-aff5-953e5f7bf001] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-scheduler-ha-429000-m03" [46b7bb6f-7c5c-4d09-af82-7b34c6022e7e] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "kube-vip-ha-429000-m03" [1c2bd3bd-fcb7-4fed-9f67-518e4acd72a2] Running
	I0203 11:13:04.225275   12544 system_pods.go:61] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:13:04.225275   12544 system_pods.go:74] duration metric: took 163.1532ms to wait for pod list to return data ...
	I0203 11:13:04.225275   12544 default_sa.go:34] waiting for default service account to be created ...
	I0203 11:13:04.404143   12544 request.go:632] Waited for 178.8664ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:13:04.404143   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/default/serviceaccounts
	I0203 11:13:04.404143   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.404143   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.404143   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.410178   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.411008   12544 default_sa.go:45] found service account: "default"
	I0203 11:13:04.411008   12544 default_sa.go:55] duration metric: took 185.7309ms for default service account to be created ...
	I0203 11:13:04.411008   12544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 11:13:04.603979   12544 request.go:632] Waited for 192.8539ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.603979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/namespaces/kube-system/pods
	I0203 11:13:04.603979   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.603979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.603979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.613186   12544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 11:13:04.626128   12544 system_pods.go:86] 24 kube-system pods found
	I0203 11:13:04.626664   12544 system_pods.go:89] "coredns-668d6bf9bc-5jzvf" [171e3213-b687-432a-b3a3-231392dddfaf] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "coredns-668d6bf9bc-r5pf5" [34df0b8e-1ae4-4e3e-a39f-9d9c505a25c4] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000" [8462336e-5775-446f-99ed-d5a46d8f85b0] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000-m02" [26a3c348-6476-41c8-b1f0-b2d86f3b77a2] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "etcd-ha-429000-m03" [ebe571cc-0005-4236-aa38-df20b82601d8] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-d7lbp" [23d86f41-7e30-4da8-924f-4c6aafb9360c] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-fv8r6" [58d47479-d8ac-4a8a-b5d7-7fc71319598b] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kindnet-ss84t" [b831ad88-827e-45b8-a208-78e6bceb72e3] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000" [a77b61c0-ca5b-4bf0-a0df-a3f7465c7cfc] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000-m02" [e3df904b-ddb6-4c43-9bd8-c35136520494] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-apiserver-ha-429000-m03" [bc8b6aae-0084-4361-8e17-479a8e9b4d60] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000" [df6cfc76-d0b4-4461-aa2e-cd44ebaec04a] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m02" [89e18813-ac30-4890-a036-b86f0a9a513f] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-controller-manager-ha-429000-m03" [68b530c4-6823-46b9-a1c6-918cf1443e4a] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-2n5cz" [aa6ffe60-2b46-473c-b2c4-b45004c6aeeb] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-dhm6z" [a2f4caab-ad59-402c-b3c8-3da356385c89] Running
	I0203 11:13:04.626664   12544 system_pods.go:89] "kube-proxy-m9nhx" [b12c48d5-de9f-4e4e-aff5-953e5f7bf001] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000" [997f2cf9-4a89-40cd-9d8b-fece398c4a10] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000-m02" [e619bf3e-cb81-41a0-bfa8-c9f6506a356e] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-scheduler-ha-429000-m03" [46b7bb6f-7c5c-4d09-af82-7b34c6022e7e] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000" [4907d066-bd93-4786-a868-9f3bd0a51f4b] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000-m02" [a53c671d-cc58-4505-901b-fe00af1f8eaa] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "kube-vip-ha-429000-m03" [1c2bd3bd-fcb7-4fed-9f67-518e4acd72a2] Running
	I0203 11:13:04.626778   12544 system_pods.go:89] "storage-provisioner" [9cea8ac0-e49e-4a9b-8e99-2da32218657c] Running
	I0203 11:13:04.626778   12544 system_pods.go:126] duration metric: took 215.7673ms to wait for k8s-apps to be running ...
	I0203 11:13:04.626778   12544 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 11:13:04.633785   12544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:13:04.662621   12544 system_svc.go:56] duration metric: took 35.8434ms WaitForService to wait for kubelet
	I0203 11:13:04.662740   12544 kubeadm.go:582] duration metric: took 24.3508042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 11:13:04.662740   12544 node_conditions.go:102] verifying NodePressure condition ...
	I0203 11:13:04.803979   12544 request.go:632] Waited for 141.1319ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.47:8443/api/v1/nodes
	I0203 11:13:04.803979   12544 round_trippers.go:463] GET https://172.25.12.47:8443/api/v1/nodes
	I0203 11:13:04.803979   12544 round_trippers.go:469] Request Headers:
	I0203 11:13:04.803979   12544 round_trippers.go:473]     Accept: application/json, */*
	I0203 11:13:04.803979   12544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 11:13:04.810465   12544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 11:13:04.811430   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 11:13:04.811600   12544 node_conditions.go:123] node cpu capacity is 2
	I0203 11:13:04.811600   12544 node_conditions.go:105] duration metric: took 148.8584ms to run NodePressure ...
	I0203 11:13:04.811600   12544 start.go:241] waiting for startup goroutines ...
	I0203 11:13:04.811705   12544 start.go:255] writing updated cluster config ...
	I0203 11:13:04.820233   12544 ssh_runner.go:195] Run: rm -f paused
	I0203 11:13:04.948449   12544 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 11:13:04.952015   12544 out.go:177] * Done! kubectl is now configured to use "ha-429000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.628084514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.765888854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766250856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766326357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 dockerd[1451]: time="2025-02-03T11:06:01.766511258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:01 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:06:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/360f3e80c12181ba5c9502a791c6d37a4bd9eb76dafa9ce6bab8b358efb62d5b/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 11:06:01 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:06:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2433d530b47c379da05cf21223ef2f866c380ff582510a431dac3f5733591ea4/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151812181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151886382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.151964882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.152076383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185274858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185613060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.185842061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:06:02 ha-429000 dockerd[1451]: time="2025-02-03T11:06:02.186136962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234018692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234146292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234168192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 dockerd[1451]: time="2025-02-03T11:13:40.234356794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:40 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:13:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fda7c172f55ef766a8f9d8daa3677620bbe748eb0ec4ea821c244838bdcbbc40/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 03 11:13:41 ha-429000 cri-dockerd[1343]: time="2025-02-03T11:13:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.118383186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119069194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119169995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 11:13:42 ha-429000 dockerd[1451]: time="2025-02-03T11:13:42.119449398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9bdb287bef2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   fda7c172f55ef       busybox-58667487b6-hjbfz
	d82f4a32d763d       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   360f3e80c1218       coredns-668d6bf9bc-r5pf5
	d9f3f914a13d8       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   2433d530b47c3       storage-provisioner
	d7595aa2e7664       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   09ac3d992ab71       coredns-668d6bf9bc-5jzvf
	989e99ddf5bb8       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              27 minutes ago      Running             kindnet-cni               0                   d1ba5b18f35b5       kindnet-fv8r6
	3ad219fbdb564       e29f9c7391fd9                                                                                         28 minutes ago      Running             kube-proxy                0                   8017e667cdcc1       kube-proxy-dhm6z
	1eff3743dfbdd       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     28 minutes ago      Running             kube-vip                  0                   fb97d436f0b00       kube-vip-ha-429000
	4c387526ccbee       2b0d6572d062c                                                                                         28 minutes ago      Running             kube-scheduler            0                   bbc148b7d95a2       kube-scheduler-ha-429000
	77604fa1a1e94       019ee182b58e2                                                                                         28 minutes ago      Running             kube-controller-manager   0                   944302cf57a59       kube-controller-manager-ha-429000
	6c03362e02b8f       a9e7e6b294baf                                                                                         28 minutes ago      Running             etcd                      0                   4e4522c4416d9       etcd-ha-429000
	36ff8ead4e917       95c0bda56fc4d                                                                                         28 minutes ago      Running             kube-apiserver            0                   45e0fe3e074c5       kube-apiserver-ha-429000
	
	
	==> coredns [d7595aa2e766] <==
	[INFO] 10.244.2.2:41426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000334804s
	[INFO] 10.244.2.2:50198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000233602s
	[INFO] 10.244.0.4:56085 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000313504s
	[INFO] 10.244.0.4:43627 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223102s
	[INFO] 10.244.0.4:43721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207803s
	[INFO] 10.244.0.4:33104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133301s
	[INFO] 10.244.0.4:56284 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115101s
	[INFO] 10.244.0.4:60159 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000220703s
	[INFO] 10.244.1.2:36340 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095701s
	[INFO] 10.244.1.2:60004 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139301s
	[INFO] 10.244.1.2:54510 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199702s
	[INFO] 10.244.2.2:46423 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291004s
	[INFO] 10.244.2.2:43421 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212203s
	[INFO] 10.244.0.4:36256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140401s
	[INFO] 10.244.0.4:50758 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000291403s
	[INFO] 10.244.0.4:56332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078001s
	[INFO] 10.244.1.2:48813 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000328804s
	[INFO] 10.244.1.2:55305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174902s
	[INFO] 10.244.2.2:60572 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165302s
	[INFO] 10.244.2.2:37570 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000099701s
	[INFO] 10.244.0.4:40645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163502s
	[INFO] 10.244.0.4:36097 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000327504s
	[INFO] 10.244.0.4:32981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108801s
	[INFO] 10.244.0.4:58940 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163302s
	[INFO] 10.244.1.2:34333 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134202s
	
	
	==> coredns [d82f4a32d763] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47772 - 61067 "HINFO IN 4472778490497682898.611741258674588714. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.054223085s
	[INFO] 10.244.2.2:49592 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.221214106s
	[INFO] 10.244.2.2:44736 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.198857072s
	[INFO] 10.244.0.4:56161 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.002871532s
	[INFO] 10.244.1.2:52272 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000232902s
	[INFO] 10.244.2.2:53162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185402s
	[INFO] 10.244.2.2:48550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268603s
	[INFO] 10.244.0.4:54448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291303s
	[INFO] 10.244.0.4:40412 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013367151s
	[INFO] 10.244.1.2:41599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135802s
	[INFO] 10.244.1.2:35082 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000143802s
	[INFO] 10.244.1.2:42027 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227103s
	[INFO] 10.244.1.2:47439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074301s
	[INFO] 10.244.1.2:58807 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102601s
	[INFO] 10.244.2.2:54735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188903s
	[INFO] 10.244.2.2:36301 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148201s
	[INFO] 10.244.0.4:35035 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249503s
	[INFO] 10.244.1.2:34636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235602s
	[INFO] 10.244.1.2:45611 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000674s
	[INFO] 10.244.2.2:38011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166102s
	[INFO] 10.244.2.2:58643 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180902s
	[INFO] 10.244.1.2:53892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145002s
	[INFO] 10.244.1.2:39281 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000142802s
	[INFO] 10.244.1.2:51636 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000131501s
	
	
	==> describe nodes <==
	Name:               ha-429000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T11_05_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:05:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:33:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:30:32 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:30:32 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:30:32 +0000   Mon, 03 Feb 2025 11:05:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:30:32 +0000   Mon, 03 Feb 2025 11:06:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.12.47
	  Hostname:    ha-429000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b02458d3503f4e728e9c53efd3caeef4
	  System UUID:                972948bd-9976-b744-b72e-49603552f61d
	  Boot ID:                    3f567654-2fa8-43dc-ac53-52200ead206b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-hjbfz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-668d6bf9bc-5jzvf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-668d6bf9bc-r5pf5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-429000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-fv8r6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-429000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-429000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-dhm6z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-429000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-429000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-429000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-429000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node ha-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node ha-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node ha-429000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m                node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	  Normal  NodeReady                27m                kubelet          Node ha-429000 status is now: NodeReady
	  Normal  RegisteredNode           24m                node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-429000 event: Registered Node ha-429000 in Controller
	
	
	Name:               ha-429000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T11_09_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:08:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:29:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Feb 2025 11:27:57 +0000   Mon, 03 Feb 2025 11:30:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Feb 2025 11:27:57 +0000   Mon, 03 Feb 2025 11:30:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Feb 2025 11:27:57 +0000   Mon, 03 Feb 2025 11:30:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Feb 2025 11:27:57 +0000   Mon, 03 Feb 2025 11:30:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.13.142
	  Hostname:    ha-429000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 769190743e2b41f69e10b009f110a981
	  System UUID:                543620fd-d931-a645-b903-0e292a0963ba
	  Boot ID:                    693afb8d-5d43-4d67-ae21-f5181f76ea2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-k7s2q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-429000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-d7lbp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-429000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-429000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-2n5cz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-429000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-429000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  RegisteredNode           24m                node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-429000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-429000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-429000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-429000-m02 event: Registered Node ha-429000-m02 in Controller
	  Normal  NodeNotReady             3m14s              node-controller  Node ha-429000-m02 status is now: NodeNotReady
	
	
	Name:               ha-429000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T11_12_39_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:12:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:33:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:32:15 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:32:15 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:32:15 +0000   Mon, 03 Feb 2025 11:12:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:32:15 +0000   Mon, 03 Feb 2025 11:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.0.10
	  Hostname:    ha-429000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccbbda46ccd846c5adf54c6a983de246
	  System UUID:                f085c3e8-6dcb-5848-90b1-62afe6e2042e
	  Boot ID:                    2f47af6c-0a16-4c42-abed-4293a55945a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-hcrnz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-429000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-ss84t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-429000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-429000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-m9nhx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-429000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-429000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-429000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-429000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-429000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-429000-m03 event: Registered Node ha-429000-m03 in Controller
	
	
	Name:               ha-429000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-429000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=ha-429000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T11_17_39_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 11:17:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-429000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 11:33:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 11:30:45 +0000   Mon, 03 Feb 2025 11:17:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 11:30:45 +0000   Mon, 03 Feb 2025 11:17:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 11:30:45 +0000   Mon, 03 Feb 2025 11:17:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 11:30:45 +0000   Mon, 03 Feb 2025 11:18:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.10.184
	  Hostname:    ha-429000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f55e06ecb3b4a27b8b0f4a3ef61e2e2
	  System UUID:                29aba8d0-6d05-e34d-992c-33f7c4041ed5
	  Boot ID:                    e80170e0-a5cb-44eb-ac55-14c502fafc91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2fwrm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-5gll8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-429000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-429000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-429000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node ha-429000-m04 event: Registered Node ha-429000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-429000-m04 event: Registered Node ha-429000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-429000-m04 event: Registered Node ha-429000-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-429000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.418749] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb 3 11:04] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.162509] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +28.900161] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +0.100368] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.496016] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.204271] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.223335] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.838161] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.193016] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.180216] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.258010] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[Feb 3 11:05] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[  +0.102538] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.803741] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +6.271371] systemd-fstab-generator[1850]: Ignoring "noauto" option for root device
	[  +0.108048] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.162332] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.891126] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[  +6.432468] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.700814] kauditd_printk_skb: 29 callbacks suppressed
	[Feb 3 11:08] hrtimer: interrupt took 1197708 ns
	[Feb 3 11:09] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6c03362e02b8] <==
	{"level":"warn","ts":"2025-02-03T11:33:38.917361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:38.921761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:38.924031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.141866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.151450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.159592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.168801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.173562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.177832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.183214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.188767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.193901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.201381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.206965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.211762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.216994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.221715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.230429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.239196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.245049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.249899Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.253276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.262982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.270662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-03T11:33:39.317135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"81267108d219df0f","from":"81267108d219df0f","remote-peer-id":"a4f71794fcaa9a11","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:33:39 up 30 min,  0 users,  load average: 0.42, 0.41, 0.38
	Linux ha-429000 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [989e99ddf5bb] <==
	I0203 11:33:05.677432       1 main.go:324] Node ha-429000-m04 has CIDR [10.244.3.0/24] 
	I0203 11:33:15.682512       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:33:15.682647       1 main.go:301] handling current node
	I0203 11:33:15.682670       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:33:15.682679       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:33:15.683155       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:33:15.683186       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:33:15.683495       1 main.go:297] Handling node with IPs: map[172.25.10.184:{}]
	I0203 11:33:15.683513       1 main.go:324] Node ha-429000-m04 has CIDR [10.244.3.0/24] 
	I0203 11:33:25.683232       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:33:25.683389       1 main.go:301] handling current node
	I0203 11:33:25.683411       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:33:25.683419       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:33:25.683646       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:33:25.683670       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:33:25.683772       1 main.go:297] Handling node with IPs: map[172.25.10.184:{}]
	I0203 11:33:25.683793       1 main.go:324] Node ha-429000-m04 has CIDR [10.244.3.0/24] 
	I0203 11:33:35.676681       1 main.go:297] Handling node with IPs: map[172.25.12.47:{}]
	I0203 11:33:35.676854       1 main.go:301] handling current node
	I0203 11:33:35.676888       1 main.go:297] Handling node with IPs: map[172.25.13.142:{}]
	I0203 11:33:35.676969       1 main.go:324] Node ha-429000-m02 has CIDR [10.244.1.0/24] 
	I0203 11:33:35.677450       1 main.go:297] Handling node with IPs: map[172.25.0.10:{}]
	I0203 11:33:35.677625       1 main.go:324] Node ha-429000-m03 has CIDR [10.244.2.0/24] 
	I0203 11:33:35.678299       1 main.go:297] Handling node with IPs: map[172.25.10.184:{}]
	I0203 11:33:35.678400       1 main.go:324] Node ha-429000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [36ff8ead4e91] <==
	I0203 11:05:29.736302       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 11:05:30.212593       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 11:05:30.670994       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 11:05:30.697507       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0203 11:05:30.722605       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 11:05:35.367513       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 11:05:35.516935       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0203 11:12:34.038934       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-429000-m03.1820ae58e684cb3f" auditID="022b8383-3d28-4ee5-b198-695f44f6ea74"
	E0203 11:12:34.031741       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.2µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-429000-m03.1820ae58e684cb3f" result=null
	E0203 11:12:34.039220       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.9µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0203 11:13:46.144176       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58482: use of closed network connection
	E0203 11:13:47.877819       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58484: use of closed network connection
	E0203 11:13:48.331705       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58488: use of closed network connection
	E0203 11:13:48.860010       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58490: use of closed network connection
	E0203 11:13:49.322250       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58492: use of closed network connection
	E0203 11:13:49.763983       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58494: use of closed network connection
	E0203 11:13:50.210765       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58496: use of closed network connection
	E0203 11:13:50.649265       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58498: use of closed network connection
	E0203 11:13:51.109481       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58500: use of closed network connection
	E0203 11:13:51.892960       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58503: use of closed network connection
	E0203 11:14:02.348371       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58505: use of closed network connection
	E0203 11:14:02.797946       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58508: use of closed network connection
	E0203 11:14:13.266251       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58511: use of closed network connection
	E0203 11:14:13.714652       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58513: use of closed network connection
	E0203 11:14:24.150346       1 conn.go:339] Error on socket receive: read tcp 172.25.15.254:8443->172.25.0.1:58515: use of closed network connection
	
	
	==> kube-controller-manager [77604fa1a1e9] <==
	I0203 11:17:41.563315       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:17:46.395028       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:17:49.462398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:18:08.295405       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:18:08.299866       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-429000-m04"
	I0203 11:18:08.324246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:18:09.875312       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:18:10.041274       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:20:20.472386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000"
	I0203 11:22:04.175475       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	I0203 11:22:51.495444       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:25:26.634039       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000"
	I0203 11:25:39.171959       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:27:09.854637       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	I0203 11:27:57.497913       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:30:25.230260       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:30:25.242631       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-429000-m04"
	I0203 11:30:25.274959       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:30:25.299272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.423584ms"
	I0203 11:30:25.301208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.901µs"
	I0203 11:30:25.420100       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:30:30.522147       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m02"
	I0203 11:30:32.123415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000"
	I0203 11:30:45.785855       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m04"
	I0203 11:32:15.995886       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-429000-m03"
	
	
	==> kube-proxy [3ad219fbdb56] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 11:05:38.307640       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 11:05:38.322687       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.47"]
	E0203 11:05:38.322860       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 11:05:38.411214       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 11:05:38.411297       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 11:05:38.411328       1 server_linux.go:170] "Using iptables Proxier"
	I0203 11:05:38.432366       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 11:05:38.432788       1 server.go:497] "Version info" version="v1.32.1"
	I0203 11:05:38.432826       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 11:05:38.436253       1 config.go:199] "Starting service config controller"
	I0203 11:05:38.436274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 11:05:38.436296       1 config.go:105] "Starting endpoint slice config controller"
	I0203 11:05:38.436301       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 11:05:38.436778       1 config.go:329] "Starting node config controller"
	I0203 11:05:38.436788       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 11:05:38.537329       1 shared_informer.go:320] Caches are synced for node config
	I0203 11:05:38.537365       1 shared_informer.go:320] Caches are synced for service config
	I0203 11:05:38.537376       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4c387526ccbe] <==
	W0203 11:05:28.617464       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0203 11:05:28.617855       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.622391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0203 11:05:28.622621       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.632706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0203 11:05:28.633284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.694851       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0203 11:05:28.695199       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.768345       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 11:05:28.768637       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.778729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0203 11:05:28.778926       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.805838       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 11:05:28.805973       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.806263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 11:05:28.806460       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.809799       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0203 11:05:28.809845       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 11:05:28.821455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0203 11:05:28.821676       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 11:05:30.341351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0203 11:13:39.241707       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-hcrnz\": pod busybox-58667487b6-hcrnz is already assigned to node \"ha-429000-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-hcrnz" node="ha-429000-m03"
	E0203 11:13:39.251063       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod f71e2d64-2c7a-460d-b2c5-82f234c46aec(default/busybox-58667487b6-hcrnz) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-hcrnz"
	E0203 11:13:39.251390       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-hcrnz\": pod busybox-58667487b6-hcrnz is already assigned to node \"ha-429000-m03\"" pod="default/busybox-58667487b6-hcrnz"
	I0203 11:13:39.251706       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-hcrnz" node="ha-429000-m03"
	
	
	==> kubelet <==
	Feb 03 11:29:30 ha-429000 kubelet[2379]: E0203 11:29:30.794883    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:29:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:29:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:29:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:29:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:30:30 ha-429000 kubelet[2379]: E0203 11:30:30.798654    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:30:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:30:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:30:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:30:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:31:30 ha-429000 kubelet[2379]: E0203 11:31:30.795414    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:31:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:31:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:31:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:31:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:32:30 ha-429000 kubelet[2379]: E0203 11:32:30.795843    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:32:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:32:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:32:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:32:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 11:33:30 ha-429000 kubelet[2379]: E0203 11:33:30.796046    2379 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 11:33:30 ha-429000 kubelet[2379]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 11:33:30 ha-429000 kubelet[2379]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 11:33:30 ha-429000 kubelet[2379]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 11:33:30 ha-429000 kubelet[2379]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-429000 -n ha-429000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-429000 -n ha-429000: (11.4055903s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-429000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (162.33s)
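The describe output above shows ha-429000-m02 carrying node.kubernetes.io/unreachable taints with every condition reported Unknown ("Kubelet stopped posting node status"), which is the state the RestartSecondaryNode check is waiting to clear. As a rough, editorial sketch (not part of the test suite), the same wait can be expressed with client-go, assuming the default kubeconfig path and the node name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it turns True or the
    // context expires; a minimal analogue of what the test expects after the
    // secondary control-plane node is restarted.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(5 * time.Second):
            }
        }
    }

    func main() {
        // Assumes kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "ha-429000-m02"); err != nil {
            fmt.Println("node not ready:", err)
        }
    }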

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (53.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- sh -c "ping -c 1 172.25.0.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- sh -c "ping -c 1 172.25.0.1": exit status 1 (10.4251809s)

                                                
                                                
-- stdout --
	PING 172.25.0.1 (172.25.0.1): 56 data bytes
	
	--- 172.25.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.25.0.1) from pod (busybox-58667487b6-c66bf): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- sh -c "ping -c 1 172.25.0.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- sh -c "ping -c 1 172.25.0.1": exit status 1 (10.4246741s)

                                                
                                                
-- stdout --
	PING 172.25.0.1 (172.25.0.1): 56 data bytes
	
	--- 172.25.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.25.0.1) from pod (busybox-58667487b6-zgvmd): exit status 1
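Both pods resolve host.minikube.internal, but a single ICMP ping to the Hyper-V host gateway (172.25.0.1) shows 100% packet loss, so the failure sits in pod-to-host reachability rather than DNS. A small sketch for re-running the same probe outside the harness follows; it assumes kubectl is on PATH, that the minikube profile name doubles as the kubectl context (minikube's usual behavior), and it reuses the pod name from the log above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Re-run the failing check by hand: exec one ICMP ping from the busybox pod
    // toward the host-side gateway seen in the log. Context and pod names are
    // assumptions carried over from the log above.
    func main() {
        cmd := exec.Command("kubectl", "--context", "multinode-749300",
            "exec", "busybox-58667487b6-c66bf", "--",
            "sh", "-c", "ping -c 1 172.25.0.1")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            // busybox ping exits non-zero on 100% loss, matching the test failure.
            fmt.Println("ping from pod failed:", err)
        }
    }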
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-749300 -n multinode-749300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-749300 -n multinode-749300: (11.1494088s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 logs -n 25: (7.8424577s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-261800 ssh -- ls                    | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:58 UTC | 03 Feb 25 11:58 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-261800                           | mount-start-1-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:58 UTC | 03 Feb 25 11:58 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-261800 ssh -- ls                    | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:58 UTC | 03 Feb 25 11:59 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-261800                           | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:59 UTC | 03 Feb 25 11:59 UTC |
	| start   | -p mount-start-2-261800                           | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 11:59 UTC | 03 Feb 25 12:01 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:01 UTC |                     |
	|         | --profile mount-start-2-261800 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-261800 ssh -- ls                    | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:01 UTC | 03 Feb 25 12:01 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-261800                           | mount-start-2-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:01 UTC | 03 Feb 25 12:02 UTC |
	| delete  | -p mount-start-1-261800                           | mount-start-1-261800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:02 UTC | 03 Feb 25 12:02 UTC |
	| start   | -p multinode-749300                               | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:02 UTC | 03 Feb 25 12:08 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- apply -f                   | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- rollout                    | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- get pods -o                | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- get pods -o                | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-c66bf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-zgvmd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-c66bf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-zgvmd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-c66bf -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-zgvmd -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- get pods -o                | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC | 03 Feb 25 12:08 UTC |
	|         | busybox-58667487b6-c66bf                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:08 UTC |                     |
	|         | busybox-58667487b6-c66bf -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.0.1                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:09 UTC | 03 Feb 25 12:09 UTC |
	|         | busybox-58667487b6-zgvmd                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-749300 -- exec                       | multinode-749300     | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:09 UTC |                     |
	|         | busybox-58667487b6-zgvmd -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.0.1                           |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
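
For reference, the busybox DNS checks recorded in the command table above can be re-run by hand. The following is a minimal PowerShell sketch, assuming the multinode-749300 profile is up and the busybox deployment from ./testdata/multinodes/multinode-pod-dns-test.yaml has been applied; pod names differ per run.

    # Re-run the multinode DNS checks from the table above (illustrative only).
    $minikube = ".\out\minikube-windows-amd64.exe"
    & $minikube kubectl -p multinode-749300 -- rollout status deployment/busybox
    $pods = & $minikube kubectl -p multinode-749300 -- get pods -o jsonpath='{.items[*].metadata.name}'
    foreach ($pod in ($pods -split ' ')) {
        # in-cluster service DNS and the host alias exercised by the test
        & $minikube kubectl -p multinode-749300 -- exec $pod -- nslookup kubernetes.default.svc.cluster.local
        & $minikube kubectl -p multinode-749300 -- exec $pod -- nslookup host.minikube.internal
    }
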
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 12:02:01
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 12:02:01.048607   11844 out.go:345] Setting OutFile to fd 1848 ...
	I0203 12:02:01.104501   11844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:02:01.104501   11844 out.go:358] Setting ErrFile to fd 1912...
	I0203 12:02:01.104501   11844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:02:01.132566   11844 out.go:352] Setting JSON to false
	I0203 12:02:01.137365   11844 start.go:129] hostinfo: {"hostname":"minikube5","uptime":168722,"bootTime":1738415398,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 12:02:01.137365   11844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 12:02:01.142368   11844 out.go:177] * [multinode-749300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 12:02:01.146156   11844 notify.go:220] Checking for updates...
	I0203 12:02:01.148750   11844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:02:01.151365   11844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 12:02:01.154005   11844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 12:02:01.156002   11844 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 12:02:01.158830   11844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 12:02:01.162826   11844 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:02:01.163690   11844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 12:02:06.204527   11844 out.go:177] * Using the hyperv driver based on user configuration
	I0203 12:02:06.208480   11844 start.go:297] selected driver: hyperv
	I0203 12:02:06.208480   11844 start.go:901] validating driver "hyperv" against <nil>
	I0203 12:02:06.208586   11844 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 12:02:06.255056   11844 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 12:02:06.256066   11844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:02:06.256405   11844 cni.go:84] Creating CNI manager for ""
	I0203 12:02:06.256405   11844 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0203 12:02:06.256405   11844 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 12:02:06.256604   11844 start.go:340] cluster config:
	{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:02:06.256871   11844 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 12:02:06.260070   11844 out.go:177] * Starting "multinode-749300" primary control-plane node in "multinode-749300" cluster
	I0203 12:02:06.262405   11844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:02:06.263145   11844 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 12:02:06.263204   11844 cache.go:56] Caching tarball of preloaded images
	I0203 12:02:06.263204   11844 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:02:06.263204   11844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:02:06.263802   11844 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:02:06.264066   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json: {Name:mke5d91096fbb48ba0cf6abe59e9b6525eed04bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:02:06.264307   11844 start.go:360] acquireMachinesLock for multinode-749300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:02:06.265134   11844 start.go:364] duration metric: took 827.3µs to acquireMachinesLock for "multinode-749300"
	I0203 12:02:06.265268   11844 start.go:93] Provisioning new machine with config: &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 12:02:06.265320   11844 start.go:125] createHost starting for "" (driver="hyperv")
	I0203 12:02:06.268896   11844 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 12:02:06.269231   11844 start.go:159] libmachine.API.Create for "multinode-749300" (driver="hyperv")
	I0203 12:02:06.269275   11844 client.go:168] LocalClient.Create starting
	I0203 12:02:06.269753   11844 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 12:02:06.269929   11844 main.go:141] libmachine: Decoding PEM data...
	I0203 12:02:06.269973   11844 main.go:141] libmachine: Parsing certificate...
	I0203 12:02:06.270194   11844 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 12:02:06.270372   11844 main.go:141] libmachine: Decoding PEM data...
	I0203 12:02:06.270460   11844 main.go:141] libmachine: Parsing certificate...
	I0203 12:02:06.270558   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 12:02:08.200990   11844 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 12:02:08.201069   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:08.201069   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 12:02:09.791312   11844 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 12:02:09.791950   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:09.792028   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 12:02:11.229866   11844 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 12:02:11.229866   11844 main.go:141] libmachine: [stderr =====>] : 
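
The two IsInRole probes above are how the driver decides whether it may manage Hyper-V: the first tests membership in the local "Hyper-V Administrators" group (well-known SID S-1-5-32-578, False on this host), the second tests the built-in Administrator role (True here). A standalone sketch of the same check, for illustration only:

    # Privilege probe equivalent to the two PowerShell one-liners logged above.
    $identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
    $principal = [Security.Principal.WindowsPrincipal]::new($identity)
    $hyperVAdmins = [System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")
    $canManage = $principal.IsInRole($hyperVAdmins) -or $principal.IsInRole([Security.Principal.WindowsBuiltInRole]"Administrator")
    "Can manage Hyper-V: $canManage"
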
	I0203 12:02:11.229866   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 12:02:14.615585   11844 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 12:02:14.615585   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:14.617550   11844 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 12:02:15.060946   11844 main.go:141] libmachine: Creating SSH key...
	I0203 12:02:15.463225   11844 main.go:141] libmachine: Creating VM...
	I0203 12:02:15.463225   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 12:02:18.129849   11844 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 12:02:18.130692   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:18.130692   11844 main.go:141] libmachine: Using switch "Default Switch"
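
The switch query above is run again just before VM creation: the driver considers External switches plus Hyper-V's built-in "Default Switch" (well-known GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444), and since only the Default Switch exists on this host, it is selected. A standalone sketch of that selection, for illustration:

    # Enumerate candidate switches the same way the logged command does, then pick one.
    $defaultSwitchId = 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'
    $candidates = Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq $defaultSwitchId) } |
        Sort-Object -Property SwitchType
    $selected = $candidates | Select-Object -First 1   # here: "Default Switch"
    "Using switch '$($selected.Name)'"
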
	I0203 12:02:18.130888   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 12:02:19.792456   11844 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 12:02:19.792456   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:19.792678   11844 main.go:141] libmachine: Creating VHD
	I0203 12:02:19.792678   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 12:02:23.379189   11844 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 62523B15-5004-419C-B6B5-B22F0CC66B1E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 12:02:23.379461   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:23.379461   11844 main.go:141] libmachine: Writing magic tar header
	I0203 12:02:23.379461   11844 main.go:141] libmachine: Writing SSH key tar header
	I0203 12:02:23.390688   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 12:02:26.420088   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:26.420167   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:26.420167   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\disk.vhd' -SizeBytes 20000MB
	I0203 12:02:28.799454   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:28.799548   11844 main.go:141] libmachine: [stderr =====>] : 
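
The disk preparation above uses a small trick: a 10MB fixed-format VHD is created first so the SSH key can be written straight into the raw image as a tar stream (the "Writing magic tar header" lines), and only then is the file converted to a dynamic VHD and grown to the requested 20000MB. Condensed into the equivalent cmdlet sequence, as a sketch with the paths from this run:

    # VHD preparation as logged: fixed stub -> convert to dynamic -> resize.
    $dir = 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (minikube writes the tar header and SSH key into fixed.vhd at this point)
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
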
	I0203 12:02:28.799639   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-749300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 12:02:32.161442   11844 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-749300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 12:02:32.161965   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:32.162062   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-749300 -DynamicMemoryEnabled $false
	I0203 12:02:34.262823   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:34.262823   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:34.262900   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-749300 -Count 2
	I0203 12:02:36.313776   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:36.314760   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:36.314760   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-749300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\boot2docker.iso'
	I0203 12:02:38.688471   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:38.688471   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:38.688471   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-749300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\disk.vhd'
	I0203 12:02:41.109587   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:41.109587   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:41.109587   11844 main.go:141] libmachine: Starting VM...
	I0203 12:02:41.109587   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300
	I0203 12:02:43.978757   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:43.978757   11844 main.go:141] libmachine: [stderr =====>] : 
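
From New-VM through Start-VM, the log above amounts to the following standalone sequence, with the parameters used in this run (2 vCPUs, 2200MB static memory, the Default Switch, the boot2docker ISO as DVD, and disk.vhd as the hard disk):

    # VM creation and start, condensed from the commands logged above.
    $name = 'multinode-749300'
    $dir  = "C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\$name"
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $name
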
	I0203 12:02:43.978757   11844 main.go:141] libmachine: Waiting for host to start...
	I0203 12:02:43.979496   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:02:46.082418   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:02:46.082490   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:46.082551   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:02:48.344887   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:48.344887   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:49.345779   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:02:51.343297   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:02:51.343790   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:51.343956   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:02:53.608512   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:53.608512   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:54.609515   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:02:56.635968   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:02:56.635968   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:56.635968   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:02:58.944601   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:02:58.944601   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:02:59.946118   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:01.954694   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:01.954694   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:01.954795   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:04.230317   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:03:04.230317   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:05.231996   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:07.239743   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:07.239743   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:07.240305   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:09.804912   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:09.805632   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:09.805671   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:11.794234   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:11.794309   11844 main.go:141] libmachine: [stderr =====>] : 
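
The repeated state/IP queries above are the driver's wait loop after Start-VM: the VM reports Running almost immediately, but the first network adapter has no IPv4 address until the guest finishes booting; in this run 172.25.1.53 appears only after several retries. A compact sketch of that loop, for illustration:

    # Poll VM state and the first adapter's first address until the guest has an IP.
    $name = 'multinode-749300'
    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM $name ).State
        $ip    = (( Hyper-V\Get-VM $name ).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "VM $name is $state at $ip"
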
	I0203 12:03:11.794309   11844 machine.go:93] provisionDockerMachine start ...
	I0203 12:03:11.794465   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:13.775445   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:13.775445   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:13.776198   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:16.112852   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:16.112852   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:16.117659   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:16.131963   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:16.131963   11844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:03:16.254110   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:03:16.254222   11844 buildroot.go:166] provisioning hostname "multinode-749300"
	I0203 12:03:16.254303   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:18.204106   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:18.204106   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:18.204106   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:20.567731   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:20.568182   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:20.573797   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:20.574443   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:20.574443   11844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300 && echo "multinode-749300" | sudo tee /etc/hostname
	I0203 12:03:20.731382   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300
	
	I0203 12:03:20.731534   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:22.684506   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:22.684506   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:22.684506   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:25.008236   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:25.008236   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:25.012873   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:25.013342   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:25.013342   11844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:03:25.168498   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:03:25.168498   11844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:03:25.168498   11844 buildroot.go:174] setting up certificates
	I0203 12:03:25.168498   11844 provision.go:84] configureAuth start
	I0203 12:03:25.168498   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:27.145202   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:27.145691   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:27.145838   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:29.521423   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:29.522468   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:29.522468   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:31.501348   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:31.501424   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:31.501424   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:33.898412   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:33.898412   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:33.898412   11844 provision.go:143] copyHostCerts
	I0203 12:03:33.898412   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:03:33.898412   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:03:33.898412   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:03:33.899097   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:03:33.899719   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:03:33.900322   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:03:33.900322   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:03:33.900322   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:03:33.901044   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:03:33.901667   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:03:33.901667   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:03:33.901667   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:03:33.902365   11844 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300 san=[127.0.0.1 172.25.1.53 localhost minikube multinode-749300]
	I0203 12:03:34.419383   11844 provision.go:177] copyRemoteCerts
	I0203 12:03:34.428538   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:03:34.428538   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:36.419687   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:36.419687   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:36.419687   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:38.779825   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:38.779825   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:38.780510   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:03:38.896233   11844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4675385s)
	I0203 12:03:38.896349   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:03:38.896738   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 12:03:38.944812   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:03:38.944812   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:03:38.991932   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:03:38.992351   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0203 12:03:39.038047   11844 provision.go:87] duration metric: took 13.8693921s to configureAuth
	I0203 12:03:39.038047   11844 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:03:39.039004   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:03:39.039083   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:40.998545   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:40.998545   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:40.998545   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:43.383253   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:43.383528   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:43.387175   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:43.387771   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:43.387771   11844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:03:43.524553   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:03:43.524553   11844 buildroot.go:70] root file system type: tmpfs
	I0203 12:03:43.524553   11844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:03:43.524553   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:45.495399   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:45.495399   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:45.495399   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:47.883643   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:47.883729   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:47.887989   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:47.887989   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:47.887989   11844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:03:48.046596   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:03:48.046668   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:50.023515   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:50.023744   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:50.023744   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:52.406083   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:52.406083   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:52.410184   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:03:52.410274   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:03:52.410274   11844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:03:54.617183   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:03:54.617183   11844 machine.go:96] duration metric: took 42.8223899s to provisionDockerMachine
	I0203 12:03:54.617183   11844 client.go:171] duration metric: took 1m48.3466906s to LocalClient.Create
	I0203 12:03:54.617183   11844 start.go:167] duration metric: took 1m48.3467811s to libmachine.API.Create "multinode-749300"
	I0203 12:03:54.617183   11844 start.go:293] postStartSetup for "multinode-749300" (driver="hyperv")
	I0203 12:03:54.617183   11844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:03:54.626438   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:03:54.626438   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:03:56.615201   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:03:56.615201   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:56.615722   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:03:58.931125   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:03:58.931624   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:03:58.931897   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:03:59.043461   11844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4169733s)
	I0203 12:03:59.052937   11844 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:03:59.059163   11844 command_runner.go:130] > NAME=Buildroot
	I0203 12:03:59.059163   11844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:03:59.059163   11844 command_runner.go:130] > ID=buildroot
	I0203 12:03:59.059163   11844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:03:59.059163   11844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:03:59.059163   11844 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:03:59.059163   11844 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:03:59.059909   11844 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:03:59.060434   11844 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:03:59.060513   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:03:59.068671   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:03:59.086403   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:03:59.130957   11844 start.go:296] duration metric: took 4.5137232s for postStartSetup
	I0203 12:03:59.133382   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:01.085613   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:01.085613   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:01.086525   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:03.532942   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:03.532942   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:03.533297   11844 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:04:03.535226   11844 start.go:128] duration metric: took 1m57.2685869s to createHost
	I0203 12:04:03.535226   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:05.499851   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:05.499851   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:05.500972   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:07.833357   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:07.833357   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:07.838326   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:04:07.838905   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:04:07.838905   11844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:04:07.965843   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738584247.980646162
	
	I0203 12:04:07.965843   11844 fix.go:216] guest clock: 1738584247.980646162
	I0203 12:04:07.965843   11844 fix.go:229] Guest: 2025-02-03 12:04:07.980646162 +0000 UTC Remote: 2025-02-03 12:04:03.5352262 +0000 UTC m=+122.562274301 (delta=4.445419962s)
	I0203 12:04:07.965843   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:09.931857   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:09.931857   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:09.932193   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:12.298068   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:12.298068   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:12.302891   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:04:12.303360   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.53 22 <nil> <nil>}
	I0203 12:04:12.303360   11844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738584247
	I0203 12:04:12.450687   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:04:07 UTC 2025
	
	I0203 12:04:12.450687   11844 fix.go:236] clock set: Mon Feb  3 12:04:07 UTC 2025
	 (err=<nil>)
	I0203 12:04:12.450687   11844 start.go:83] releasing machines lock for "multinode-749300", held for 2m6.1841334s
	I0203 12:04:12.451873   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:14.399649   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:14.399649   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:14.399751   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:16.789316   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:16.789504   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:16.793624   11844 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:04:16.793748   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:16.800784   11844 ssh_runner.go:195] Run: cat /version.json
	I0203 12:04:16.800784   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:04:18.855211   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:18.855211   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:18.855944   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:18.855944   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:04:18.855944   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:18.855944   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:04:21.279807   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:21.279807   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:21.279807   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:04:21.308857   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:04:21.308857   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:04:21.308857   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:04:21.368069   11844 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:04:21.368592   11844 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.574854s)
	W0203 12:04:21.368592   11844 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:04:21.400115   11844 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0203 12:04:21.400360   11844 ssh_runner.go:235] Completed: cat /version.json: (4.5995241s)
	I0203 12:04:21.408167   11844 ssh_runner.go:195] Run: systemctl --version
	I0203 12:04:21.417907   11844 command_runner.go:130] > systemd 252 (252)
	I0203 12:04:21.417907   11844 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0203 12:04:21.426742   11844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:04:21.435676   11844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0203 12:04:21.436197   11844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:04:21.445560   11844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:04:21.480191   11844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:04:21.480191   11844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:04:21.480191   11844 start.go:495] detecting cgroup driver to use...
	I0203 12:04:21.481450   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0203 12:04:21.485506   11844 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:04:21.485506   11844 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:04:21.520755   11844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:04:21.528274   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:04:21.555890   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 12:04:21.574271   11844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:04:21.581818   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 12:04:21.610996   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:04:21.640143   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:04:21.666702   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:04:21.696361   11844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:04:21.724764   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:04:21.751801   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:04:21.779378   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 12:04:21.805840   11844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:04:21.822663   11844 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:04:21.823681   11844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:04:21.832160   11844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:04:21.861926   11844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 12:04:21.884481   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:22.060406   11844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:04:22.092667   11844 start.go:495] detecting cgroup driver to use...
	I0203 12:04:22.100137   11844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:04:22.120718   11844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:04:22.120718   11844 command_runner.go:130] > [Unit]
	I0203 12:04:22.120801   11844 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:04:22.120801   11844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:04:22.120801   11844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:04:22.120801   11844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:04:22.120801   11844 command_runner.go:130] > StartLimitBurst=3
	I0203 12:04:22.120801   11844 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:04:22.120880   11844 command_runner.go:130] > [Service]
	I0203 12:04:22.120914   11844 command_runner.go:130] > Type=notify
	I0203 12:04:22.120930   11844 command_runner.go:130] > Restart=on-failure
	I0203 12:04:22.120930   11844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:04:22.120930   11844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:04:22.120930   11844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:04:22.120930   11844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:04:22.121008   11844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:04:22.121028   11844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:04:22.121028   11844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:04:22.121028   11844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:04:22.121028   11844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:04:22.121103   11844 command_runner.go:130] > ExecStart=
	I0203 12:04:22.121103   11844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:04:22.121103   11844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:04:22.121175   11844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:04:22.121175   11844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:04:22.121175   11844 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:04:22.121175   11844 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:04:22.121175   11844 command_runner.go:130] > LimitCORE=infinity
	I0203 12:04:22.121242   11844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:04:22.121242   11844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:04:22.121242   11844 command_runner.go:130] > TasksMax=infinity
	I0203 12:04:22.121242   11844 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:04:22.121242   11844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:04:22.121242   11844 command_runner.go:130] > Delegate=yes
	I0203 12:04:22.121309   11844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:04:22.121309   11844 command_runner.go:130] > KillMode=process
	I0203 12:04:22.121309   11844 command_runner.go:130] > [Install]
	I0203 12:04:22.121309   11844 command_runner.go:130] > WantedBy=multi-user.target
	I0203 12:04:22.130465   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:04:22.159825   11844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:04:22.194760   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:04:22.226785   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:04:22.257854   11844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:04:22.320880   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:04:22.345762   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:04:22.379259   11844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0203 12:04:22.388366   11844 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:04:22.395070   11844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:04:22.403230   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:04:22.422042   11844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:04:22.461982   11844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:04:22.643979   11844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:04:22.824883   11844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:04:22.825205   11844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:04:22.867305   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:23.060409   11844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:04:25.675614   11844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6151751s)
	I0203 12:04:25.684826   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:04:25.716763   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:04:25.747635   11844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:04:25.941890   11844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:04:26.139023   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:26.346086   11844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:04:26.384388   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:04:26.417942   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:26.613030   11844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:04:26.718208   11844 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:04:26.727064   11844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:04:26.735491   11844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:04:26.735491   11844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:04:26.735491   11844 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0203 12:04:26.735491   11844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:04:26.735491   11844 command_runner.go:130] > Access: 2025-02-03 12:04:26.654388416 +0000
	I0203 12:04:26.735491   11844 command_runner.go:130] > Modify: 2025-02-03 12:04:26.654388416 +0000
	I0203 12:04:26.735491   11844 command_runner.go:130] > Change: 2025-02-03 12:04:26.658388427 +0000
	I0203 12:04:26.735491   11844 command_runner.go:130] >  Birth: -
	I0203 12:04:26.735491   11844 start.go:563] Will wait 60s for crictl version
	I0203 12:04:26.742491   11844 ssh_runner.go:195] Run: which crictl
	I0203 12:04:26.748852   11844 command_runner.go:130] > /usr/bin/crictl
	I0203 12:04:26.756898   11844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:04:26.808001   11844 command_runner.go:130] > Version:  0.1.0
	I0203 12:04:26.808001   11844 command_runner.go:130] > RuntimeName:  docker
	I0203 12:04:26.808001   11844 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:04:26.808082   11844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:04:26.808191   11844 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:04:26.814744   11844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:04:26.848142   11844 command_runner.go:130] > 27.4.0
	I0203 12:04:26.856333   11844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:04:26.895473   11844 command_runner.go:130] > 27.4.0
	I0203 12:04:26.898943   11844 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:04:26.899068   11844 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:04:26.903051   11844 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:04:26.903576   11844 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:04:26.903576   11844 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:04:26.903576   11844 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:04:26.905845   11844 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:04:26.905845   11844 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:04:26.913447   11844 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:04:26.920107   11844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:04:26.942675   11844 kubeadm.go:883] updating cluster {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 12:04:26.942923   11844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:04:26.950390   11844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:04:26.977688   11844 docker.go:689] Got preloaded images: 
	I0203 12:04:26.977688   11844 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0203 12:04:26.986483   11844 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 12:04:27.004116   11844 command_runner.go:139] > {"Repositories":{}}
	I0203 12:04:27.012275   11844 ssh_runner.go:195] Run: which lz4
	I0203 12:04:27.018657   11844 command_runner.go:130] > /usr/bin/lz4
	I0203 12:04:27.018816   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0203 12:04:27.027622   11844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0203 12:04:27.033874   11844 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 12:04:27.033874   11844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0203 12:04:27.034049   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0203 12:04:28.824155   11844 docker.go:653] duration metric: took 1.8051106s to copy over tarball
	I0203 12:04:28.831797   11844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0203 12:04:37.198350   11844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3664579s)
	I0203 12:04:37.198350   11844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0203 12:04:37.265292   11844 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0203 12:04:37.287997   11844 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.1":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.1":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.1":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102
161f1ded087897a"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.1":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0203 12:04:37.288059   11844 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0203 12:04:37.328503   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:37.513933   11844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:04:40.779851   11844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2658813s)
	I0203 12:04:40.788608   11844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:04:40.816264   11844 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0203 12:04:40.816330   11844 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0203 12:04:40.816330   11844 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0203 12:04:40.816330   11844 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0203 12:04:40.816395   11844 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0203 12:04:40.816395   11844 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0203 12:04:40.816395   11844 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0203 12:04:40.816395   11844 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:04:40.816458   11844 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 12:04:40.816563   11844 cache_images.go:84] Images are preloaded, skipping loading
	I0203 12:04:40.816563   11844 kubeadm.go:934] updating node { 172.25.1.53 8443 v1.32.1 docker true true} ...
	I0203 12:04:40.816803   11844 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.1.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:04:40.825047   11844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 12:04:40.890673   11844 command_runner.go:130] > cgroupfs
	I0203 12:04:40.890775   11844 cni.go:84] Creating CNI manager for ""
	I0203 12:04:40.890775   11844 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 12:04:40.890775   11844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 12:04:40.890775   11844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.1.53 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-749300 NodeName:multinode-749300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.1.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.1.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 12:04:40.890775   11844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.1.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-749300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.1.53"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.1.53"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 12:04:40.899880   11844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:04:40.917228   11844 command_runner.go:130] > kubeadm
	I0203 12:04:40.917228   11844 command_runner.go:130] > kubectl
	I0203 12:04:40.917228   11844 command_runner.go:130] > kubelet
	I0203 12:04:40.918218   11844 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 12:04:40.927027   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 12:04:40.944313   11844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0203 12:04:40.974703   11844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 12:04:41.005430   11844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0203 12:04:41.040766   11844 ssh_runner.go:195] Run: grep 172.25.1.53	control-plane.minikube.internal$ /etc/hosts
	I0203 12:04:41.053016   11844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.1.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:04:41.080142   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:04:41.280574   11844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:04:41.308868   11844 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.1.53
	I0203 12:04:41.308868   11844 certs.go:194] generating shared ca certs ...
	I0203 12:04:41.308868   11844 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.308868   11844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:04:41.310122   11844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:04:41.310326   11844 certs.go:256] generating profile certs ...
	I0203 12:04:41.310647   11844 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.key
	I0203 12:04:41.310647   11844 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.crt with IP's: []
	I0203 12:04:41.471156   11844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.crt ...
	I0203 12:04:41.471156   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.crt: {Name:mk1afff249a45610763ad9047043bfcfafd96b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.472938   11844 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.key ...
	I0203 12:04:41.472938   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.key: {Name:mk2c59765f610682257cf053f5988f5d375d796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.473917   11844 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.195f0bfb
	I0203 12:04:41.473917   11844 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.195f0bfb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.1.53]
	I0203 12:04:41.757735   11844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.195f0bfb ...
	I0203 12:04:41.757735   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.195f0bfb: {Name:mk5040805a60d5812af4d633a57575ad1a61930e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.758506   11844 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.195f0bfb ...
	I0203 12:04:41.758506   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.195f0bfb: {Name:mk5abe3abd8321abfe3bd9cff1c55644e591cc26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.759584   11844 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.195f0bfb -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt
	I0203 12:04:41.774366   11844 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.195f0bfb -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key
	I0203 12:04:41.775149   11844 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key
	I0203 12:04:41.775149   11844 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt with IP's: []
	I0203 12:04:41.914098   11844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt ...
	I0203 12:04:41.914098   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt: {Name:mkd2ef002b4d536ac9bdec3e3e99357bf6df9852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.914990   11844 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key ...
	I0203 12:04:41.915986   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key: {Name:mk8e059571336d1441e39ebc4c4839c2b88b53f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:04:41.916671   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:04:41.917000   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:04:41.917190   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:04:41.917312   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:04:41.917312   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 12:04:41.917312   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 12:04:41.917312   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 12:04:41.930477   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 12:04:41.930477   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:04:41.930477   11844 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:04:41.930477   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:04:41.930477   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:04:41.931487   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:04:41.931487   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:04:41.931487   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:04:41.931487   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:04:41.931487   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:04:41.931487   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:04:41.932487   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:04:41.980119   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:04:42.024331   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:04:42.070111   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:04:42.112936   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 12:04:42.160136   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 12:04:42.204684   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 12:04:42.248049   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 12:04:42.288186   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:04:42.329559   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:04:42.384353   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:04:42.438208   11844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 12:04:42.481419   11844 ssh_runner.go:195] Run: openssl version
	I0203 12:04:42.490984   11844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:04:42.499738   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:04:42.528551   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:04:42.534922   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:04:42.535824   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:04:42.543421   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:04:42.552418   11844 command_runner.go:130] > b5213941
	I0203 12:04:42.563096   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:04:42.594438   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:04:42.626720   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:04:42.635001   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:04:42.635001   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:04:42.643002   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:04:42.651619   11844 command_runner.go:130] > 51391683
	I0203 12:04:42.659913   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 12:04:42.689124   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:04:42.717552   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:04:42.724531   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:04:42.724531   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:04:42.732258   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:04:42.740278   11844 command_runner.go:130] > 3ec20f2e
	I0203 12:04:42.749450   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 12:04:42.776588   11844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:04:42.782686   11844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:04:42.783390   11844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:04:42.783646   11844 kubeadm.go:392] StartCluster: {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:04:42.790141   11844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 12:04:42.822067   11844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 12:04:42.840459   11844 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0203 12:04:42.840459   11844 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0203 12:04:42.840459   11844 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0203 12:04:42.849138   11844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 12:04:42.875574   11844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 12:04:42.895005   11844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0203 12:04:42.895005   11844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0203 12:04:42.895005   11844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0203 12:04:42.895005   11844 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:04:42.896188   11844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:04:42.896268   11844 kubeadm.go:157] found existing configuration files:
	
	I0203 12:04:42.903879   11844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 12:04:42.921505   11844 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:04:42.921505   11844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:04:42.930061   11844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 12:04:42.958356   11844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 12:04:42.974030   11844 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:04:42.974989   11844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:04:42.984145   11844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 12:04:43.007740   11844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 12:04:43.027227   11844 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:04:43.027367   11844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:04:43.036521   11844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 12:04:43.069680   11844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 12:04:43.086012   11844 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:04:43.086839   11844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:04:43.095506   11844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 12:04:43.113145   11844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0203 12:04:43.460246   11844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 12:04:43.460246   11844 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 12:04:55.978457   11844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0203 12:04:55.978528   11844 command_runner.go:130] > [init] Using Kubernetes version: v1.32.1
	I0203 12:04:55.978699   11844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:04:55.978699   11844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0203 12:04:55.978920   11844 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 12:04:55.978920   11844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 12:04:55.979286   11844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 12:04:55.979286   11844 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 12:04:55.979596   11844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 12:04:55.979669   11844 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 12:04:55.979888   11844 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 12:04:55.979888   11844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 12:04:55.982443   11844 out.go:235]   - Generating certificates and keys ...
	I0203 12:04:55.982676   11844 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0203 12:04:55.982727   11844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0203 12:04:55.982961   11844 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0203 12:04:55.982961   11844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0203 12:04:55.983108   11844 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 12:04:55.983108   11844 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 12:04:55.983108   11844 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0203 12:04:55.983108   11844 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0203 12:04:55.983108   11844 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0203 12:04:55.983108   11844 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0203 12:04:55.983642   11844 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0203 12:04:55.983797   11844 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0203 12:04:55.983797   11844 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0203 12:04:55.983797   11844 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0203 12:04:55.983797   11844 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-749300] and IPs [172.25.1.53 127.0.0.1 ::1]
	I0203 12:04:55.983797   11844 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-749300] and IPs [172.25.1.53 127.0.0.1 ::1]
	I0203 12:04:55.984323   11844 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0203 12:04:55.984323   11844 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0203 12:04:55.984508   11844 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-749300] and IPs [172.25.1.53 127.0.0.1 ::1]
	I0203 12:04:55.984508   11844 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-749300] and IPs [172.25.1.53 127.0.0.1 ::1]
	I0203 12:04:55.984508   11844 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 12:04:55.984508   11844 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 12:04:55.984508   11844 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 12:04:55.984508   11844 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 12:04:55.984508   11844 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0203 12:04:55.984508   11844 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0203 12:04:55.985058   11844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 12:04:55.985160   11844 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 12:04:55.985243   11844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 12:04:55.985319   11844 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 12:04:55.985455   11844 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 12:04:55.985455   11844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 12:04:55.985592   11844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 12:04:55.985651   11844 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 12:04:55.985722   11844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 12:04:55.985722   11844 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 12:04:55.985868   11844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 12:04:55.985868   11844 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 12:04:55.985868   11844 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 12:04:55.985868   11844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 12:04:55.986107   11844 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 12:04:55.986107   11844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 12:04:55.988803   11844 out.go:235]   - Booting up control plane ...
	I0203 12:04:55.988996   11844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 12:04:55.988996   11844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 12:04:55.988996   11844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 12:04:55.988996   11844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 12:04:55.988996   11844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 12:04:55.988996   11844 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 12:04:55.988996   11844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:04:55.988996   11844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:04:55.988996   11844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:04:55.988996   11844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:04:55.988996   11844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0203 12:04:55.988996   11844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:04:55.990006   11844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 12:04:55.990092   11844 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0203 12:04:55.990390   11844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 12:04:55.990390   11844 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 12:04:55.990390   11844 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.00132229s
	I0203 12:04:55.990390   11844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00132229s
	I0203 12:04:55.990390   11844 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 12:04:55.990390   11844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0203 12:04:55.990390   11844 command_runner.go:130] > [api-check] The API server is healthy after 6.50303594s
	I0203 12:04:55.990390   11844 kubeadm.go:310] [api-check] The API server is healthy after 6.50303594s
	I0203 12:04:55.991173   11844 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 12:04:55.991173   11844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 12:04:55.991583   11844 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 12:04:55.991583   11844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 12:04:55.991732   11844 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0203 12:04:55.991764   11844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 12:04:55.992216   11844 kubeadm.go:310] [mark-control-plane] Marking the node multinode-749300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 12:04:55.992257   11844 command_runner.go:130] > [mark-control-plane] Marking the node multinode-749300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 12:04:55.992420   11844 command_runner.go:130] > [bootstrap-token] Using token: x8qjf2.3zsmuo8o1zh8jjmv
	I0203 12:04:55.992451   11844 kubeadm.go:310] [bootstrap-token] Using token: x8qjf2.3zsmuo8o1zh8jjmv
	I0203 12:04:55.995307   11844 out.go:235]   - Configuring RBAC rules ...
	I0203 12:04:55.995307   11844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 12:04:55.995307   11844 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 12:04:55.995959   11844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 12:04:55.995998   11844 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 12:04:55.996326   11844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 12:04:55.996386   11844 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 12:04:55.996674   11844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 12:04:55.996746   11844 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 12:04:55.996979   11844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 12:04:55.996979   11844 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 12:04:55.996979   11844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 12:04:55.996979   11844 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 12:04:55.996979   11844 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 12:04:55.997504   11844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 12:04:55.997596   11844 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0203 12:04:55.997662   11844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0203 12:04:55.997721   11844 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0203 12:04:55.997721   11844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0203 12:04:55.997721   11844 kubeadm.go:310] 
	I0203 12:04:55.997721   11844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0203 12:04:55.997922   11844 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0203 12:04:55.997966   11844 kubeadm.go:310] 
	I0203 12:04:55.997993   11844 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0203 12:04:55.997993   11844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0203 12:04:55.997993   11844 kubeadm.go:310] 
	I0203 12:04:55.997993   11844 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0203 12:04:55.997993   11844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0203 12:04:55.997993   11844 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 12:04:55.997993   11844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 12:04:55.997993   11844 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 12:04:55.997993   11844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 12:04:55.998605   11844 kubeadm.go:310] 
	I0203 12:04:55.998739   11844 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0203 12:04:55.998739   11844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0203 12:04:55.998739   11844 kubeadm.go:310] 
	I0203 12:04:55.998782   11844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 12:04:55.998782   11844 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 12:04:55.998782   11844 kubeadm.go:310] 
	I0203 12:04:55.998975   11844 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0203 12:04:55.999037   11844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0203 12:04:55.999225   11844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 12:04:55.999225   11844 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 12:04:55.999404   11844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 12:04:55.999467   11844 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 12:04:55.999467   11844 kubeadm.go:310] 
	I0203 12:04:55.999706   11844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 12:04:55.999706   11844 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0203 12:04:55.999917   11844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0203 12:04:55.999917   11844 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0203 12:04:55.999917   11844 kubeadm.go:310] 
	I0203 12:04:56.000100   11844 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x8qjf2.3zsmuo8o1zh8jjmv \
	I0203 12:04:56.000100   11844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x8qjf2.3zsmuo8o1zh8jjmv \
	I0203 12:04:56.000401   11844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce \
	I0203 12:04:56.000401   11844 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce \
	I0203 12:04:56.000458   11844 command_runner.go:130] > 	--control-plane 
	I0203 12:04:56.000518   11844 kubeadm.go:310] 	--control-plane 
	I0203 12:04:56.000518   11844 kubeadm.go:310] 
	I0203 12:04:56.000701   11844 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0203 12:04:56.000701   11844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0203 12:04:56.000783   11844 kubeadm.go:310] 
	I0203 12:04:56.000914   11844 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x8qjf2.3zsmuo8o1zh8jjmv \
	I0203 12:04:56.000914   11844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x8qjf2.3zsmuo8o1zh8jjmv \
	I0203 12:04:56.001173   11844 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
	I0203 12:04:56.001173   11844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
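With the init phases above complete, a quick confirmation that the control plane is serving (a sketch only, reusing the kubectl binary path and the admin.conf location that appear in this log) would be:

    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system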
	I0203 12:04:56.001235   11844 cni.go:84] Creating CNI manager for ""
	I0203 12:04:56.001235   11844 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0203 12:04:56.004230   11844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 12:04:56.013968   11844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 12:04:56.021979   11844 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0203 12:04:56.021979   11844 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0203 12:04:56.021979   11844 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0203 12:04:56.021979   11844 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0203 12:04:56.021979   11844 command_runner.go:130] > Access: 2025-02-03 12:03:09.760118600 +0000
	I0203 12:04:56.021979   11844 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0203 12:04:56.021979   11844 command_runner.go:130] > Change: 2025-02-03 12:03:00.067000000 +0000
	I0203 12:04:56.021979   11844 command_runner.go:130] >  Birth: -
	I0203 12:04:56.021979   11844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 12:04:56.021979   11844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 12:04:56.066531   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 12:04:56.724082   11844 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0203 12:04:56.724141   11844 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0203 12:04:56.724178   11844 command_runner.go:130] > serviceaccount/kindnet created
	I0203 12:04:56.724178   11844 command_runner.go:130] > daemonset.apps/kindnet created
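The clusterrole, clusterrolebinding, serviceaccount and daemonset created above come from the bundled kindnet manifest applied from /var/tmp/minikube/cni.yaml. A hedged rollout check, assuming the DaemonSet lands in kube-system as minikube's manifest normally places it:

    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet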
	I0203 12:04:56.724218   11844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 12:04:56.734456   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-749300 minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=multinode-749300 minikube.k8s.io/primary=true
	I0203 12:04:56.735496   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:56.753806   11844 command_runner.go:130] > -16
	I0203 12:04:56.753806   11844 ops.go:34] apiserver oom_adj: -16
	I0203 12:04:56.928145   11844 command_runner.go:130] > node/multinode-749300 labeled
	I0203 12:04:56.935043   11844 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0203 12:04:56.943422   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:57.063695   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:57.444210   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:57.551256   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:57.945288   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:58.059291   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:58.445245   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:58.549580   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:58.943884   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:59.041562   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:59.443824   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:04:59.571450   11844 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0203 12:04:59.945827   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 12:05:00.066152   11844 command_runner.go:130] > NAME      SECRETS   AGE
	I0203 12:05:00.066152   11844 command_runner.go:130] > default   0         1s
	I0203 12:05:00.066152   11844 kubeadm.go:1113] duration metric: took 3.3418958s to wait for elevateKubeSystemPrivileges
	I0203 12:05:00.066152   11844 kubeadm.go:394] duration metric: took 17.2823106s to StartCluster
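The repeated "kubectl get sa default" runs above are a plain poll: the step succeeds once the "default" ServiceAccount exists, here after a handful of retries over roughly 3.3s (the elevateKubeSystemPrivileges metric). An equivalent standalone loop, as a sketch:

    until sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5   # retry until the ServiceAccount controller has created "default"
    done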
	I0203 12:05:00.066152   11844 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:05:00.066152   11844 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:05:00.068530   11844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:05:00.069818   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 12:05:00.070030   11844 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 12:05:00.070030   11844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 12:05:00.070204   11844 addons.go:69] Setting storage-provisioner=true in profile "multinode-749300"
	I0203 12:05:00.070393   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:05:00.070204   11844 addons.go:69] Setting default-storageclass=true in profile "multinode-749300"
	I0203 12:05:00.070541   11844 addons.go:238] Setting addon storage-provisioner=true in "multinode-749300"
	I0203 12:05:00.070541   11844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-749300"
	I0203 12:05:00.070641   11844 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:05:00.071512   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:05:00.071742   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:05:00.073418   11844 out.go:177] * Verifying Kubernetes components...
	I0203 12:05:00.089538   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:05:00.310597   11844 command_runner.go:130] > apiVersion: v1
	I0203 12:05:00.310597   11844 command_runner.go:130] > data:
	I0203 12:05:00.310597   11844 command_runner.go:130] >   Corefile: |
	I0203 12:05:00.310597   11844 command_runner.go:130] >     .:53 {
	I0203 12:05:00.310597   11844 command_runner.go:130] >         errors
	I0203 12:05:00.310597   11844 command_runner.go:130] >         health {
	I0203 12:05:00.310597   11844 command_runner.go:130] >            lameduck 5s
	I0203 12:05:00.310597   11844 command_runner.go:130] >         }
	I0203 12:05:00.310597   11844 command_runner.go:130] >         ready
	I0203 12:05:00.310597   11844 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0203 12:05:00.310597   11844 command_runner.go:130] >            pods insecure
	I0203 12:05:00.310597   11844 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0203 12:05:00.310597   11844 command_runner.go:130] >            ttl 30
	I0203 12:05:00.310597   11844 command_runner.go:130] >         }
	I0203 12:05:00.310597   11844 command_runner.go:130] >         prometheus :9153
	I0203 12:05:00.310597   11844 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0203 12:05:00.310597   11844 command_runner.go:130] >            max_concurrent 1000
	I0203 12:05:00.310597   11844 command_runner.go:130] >         }
	I0203 12:05:00.310597   11844 command_runner.go:130] >         cache 30 {
	I0203 12:05:00.310597   11844 command_runner.go:130] >            disable success cluster.local
	I0203 12:05:00.310597   11844 command_runner.go:130] >            disable denial cluster.local
	I0203 12:05:00.310597   11844 command_runner.go:130] >         }
	I0203 12:05:00.310597   11844 command_runner.go:130] >         loop
	I0203 12:05:00.310597   11844 command_runner.go:130] >         reload
	I0203 12:05:00.310597   11844 command_runner.go:130] >         loadbalance
	I0203 12:05:00.310597   11844 command_runner.go:130] >     }
	I0203 12:05:00.310597   11844 command_runner.go:130] > kind: ConfigMap
	I0203 12:05:00.310597   11844 command_runner.go:130] > metadata:
	I0203 12:05:00.310597   11844 command_runner.go:130] >   creationTimestamp: "2025-02-03T12:04:55Z"
	I0203 12:05:00.310597   11844 command_runner.go:130] >   name: coredns
	I0203 12:05:00.310597   11844 command_runner.go:130] >   namespace: kube-system
	I0203 12:05:00.310597   11844 command_runner.go:130] >   resourceVersion: "261"
	I0203 12:05:00.310597   11844 command_runner.go:130] >   uid: 8c919c60-0e25-42cc-bc90-d8b3f70106b0
	I0203 12:05:00.316936   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 12:05:00.419713   11844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:05:00.880412   11844 command_runner.go:130] > configmap/coredns replaced
	I0203 12:05:00.880568   11844 start.go:971] {"host.minikube.internal": 172.25.0.1} host record injected into CoreDNS's ConfigMap
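The sed pipeline above rewrites the CoreDNS Corefile in place: it adds a "log" directive before "errors" and a hosts{} stanza mapping 172.25.0.1 to host.minikube.internal ahead of the forward block. To inspect the patched Corefile afterwards (a sketch; the exact rendering depends on the CoreDNS ConfigMap contents):

    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'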
	I0203 12:05:00.881943   11844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:05:00.882274   11844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:05:00.882567   11844 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.1.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:05:00.883261   11844 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.1.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:05:00.883801   11844 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 12:05:00.884077   11844 node_ready.go:35] waiting up to 6m0s for node "multinode-749300" to be "Ready" ...
	I0203 12:05:00.884077   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:00.884077   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:00.884077   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:00.884077   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:00.884077   11844 round_trippers.go:463] GET https://172.25.1.53:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0203 12:05:00.884077   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:00.884077   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:00.884077   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:00.901807   11844 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0203 12:05:00.901894   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:00.901894   11844 round_trippers.go:580]     Audit-Id: 1fe2c574-1c51-4dd5-bbac-eec4900e361d
	I0203 12:05:00.901894   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:00.901894   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:00.901894   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:00.901967   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:00.901967   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:00 GMT
	I0203 12:05:00.901967   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:00.902704   11844 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0203 12:05:00.902704   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:00.902704   11844 round_trippers.go:580]     Audit-Id: d803b48f-34fa-4836-880c-910adde68c17
	I0203 12:05:00.902704   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:00.902704   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:00.902704   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:00.902704   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:00.902704   11844 round_trippers.go:580]     Content-Length: 291
	I0203 12:05:00.902704   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:00 GMT
	I0203 12:05:00.903245   11844 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"468b2221-02f0-432d-bcb6-1e7016f32d4a","resourceVersion":"380","creationTimestamp":"2025-02-03T12:04:55Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0203 12:05:00.904022   11844 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"468b2221-02f0-432d-bcb6-1e7016f32d4a","resourceVersion":"380","creationTimestamp":"2025-02-03T12:04:55Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0203 12:05:00.904022   11844 round_trippers.go:463] PUT https://172.25.1.53:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0203 12:05:00.904022   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:00.904022   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:00.904022   11844 round_trippers.go:473]     Content-Type: application/json
	I0203 12:05:00.904022   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:00.917632   11844 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0203 12:05:00.917632   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:00.917632   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:00.917632   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:00.917632   11844 round_trippers.go:580]     Content-Length: 291
	I0203 12:05:00.917632   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:00 GMT
	I0203 12:05:00.917632   11844 round_trippers.go:580]     Audit-Id: 8e88d951-b0e4-405a-b413-b1c83976de95
	I0203 12:05:00.917632   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:00.917632   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:00.917632   11844 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"468b2221-02f0-432d-bcb6-1e7016f32d4a","resourceVersion":"382","creationTimestamp":"2025-02-03T12:04:55Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0203 12:05:01.384359   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:01.384359   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:01.384359   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:01.384359   11844 round_trippers.go:463] GET https://172.25.1.53:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0203 12:05:01.384359   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:01.384359   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:01.384359   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:01.384359   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:01.388479   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:01.388479   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:01.388479   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:01.388479   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:01.388479   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Audit-Id: d81faeab-ced1-4fec-a453-9e7f1dcc2ceb
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:01.388479   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:01 GMT
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Audit-Id: 5fce66f2-9d07-4865-b112-041625e41e51
	I0203 12:05:01.388479   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:01.388479   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Content-Length: 291
	I0203 12:05:01.388479   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:01 GMT
	I0203 12:05:01.388479   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:01.389372   11844 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"468b2221-02f0-432d-bcb6-1e7016f32d4a","resourceVersion":"392","creationTimestamp":"2025-02-03T12:04:55Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0203 12:05:01.389582   11844 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-749300" context rescaled to 1 replicas
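The PUT on the scale subresource above drops the coredns Deployment from 2 replicas to 1 (minikube runs a single CoreDNS pod by default). The same rescale done by hand against the same kubeconfig would be roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1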
	I0203 12:05:01.885078   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:01.885078   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:01.885078   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:01.885078   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:01.900046   11844 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 12:05:01.900098   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:01.900098   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:01 GMT
	I0203 12:05:01.900098   11844 round_trippers.go:580]     Audit-Id: 4fe4958f-5232-4b64-921b-6d6aa035d34d
	I0203 12:05:01.900098   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:01.900098   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:01.900274   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:01.900343   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:01.900343   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:02.154616   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:05:02.154987   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:02.159029   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:05:02.159218   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:02.160247   11844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:05:02.160878   11844 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.1.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:05:02.161158   11844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:05:02.161158   11844 addons.go:238] Setting addon default-storageclass=true in "multinode-749300"
	I0203 12:05:02.161866   11844 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:05:02.162549   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:05:02.163705   11844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 12:05:02.163789   11844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 12:05:02.163822   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:05:02.384455   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:02.384455   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:02.384455   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:02.384455   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:02.388772   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:02.388772   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:02.388772   11844 round_trippers.go:580]     Audit-Id: 11728d31-fb60-46c0-8566-ec18aac8a142
	I0203 12:05:02.388772   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:02.388772   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:02.388772   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:02.388772   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:02.388772   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:02 GMT
	I0203 12:05:02.388772   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:02.884965   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:02.884965   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:02.884965   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:02.884965   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:02.890389   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:05:02.890389   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:02.890516   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:02.890516   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:02.890559   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:02 GMT
	I0203 12:05:02.890559   11844 round_trippers.go:580]     Audit-Id: 2792fe93-fca5-4543-b4e6-c7f020a3158f
	I0203 12:05:02.890559   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:02.890559   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:02.890737   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:02.891184   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
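The polling GETs on /api/v1/nodes/multinode-749300 check the node's Ready condition, which stays "False" here, typically until the CNI plugin and kubelet settle. A one-line equivalent of that probe (sketch):

    kubectl get node multinode-749300 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'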
	I0203 12:05:03.384502   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:03.384502   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:03.384502   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:03.384502   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:03.389089   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:03.389089   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:03.389089   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:03.389089   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:03.389089   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:03.389089   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:03 GMT
	I0203 12:05:03.389089   11844 round_trippers.go:580]     Audit-Id: 6a0241a6-02ef-4574-b4a3-321252ddb724
	I0203 12:05:03.389089   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:03.389089   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:03.884689   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:03.884689   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:03.884689   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:03.884689   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:03.888550   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:03.888550   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:03.888550   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:03.888550   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:03 GMT
	I0203 12:05:03.888550   11844 round_trippers.go:580]     Audit-Id: d047975d-72a9-4835-8c61-7012958d5335
	I0203 12:05:03.888550   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:03.888550   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:03.888550   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:03.888907   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:04.284941   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:05:04.285027   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:04.285027   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:05:04.384755   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:04.384755   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:04.384755   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:04.384755   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:04.389742   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:04.389846   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:04.389846   11844 round_trippers.go:580]     Audit-Id: c312eefe-0371-4ee2-843b-ecc0a1259a5f
	I0203 12:05:04.389846   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:04.389846   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:04.389846   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:04.389945   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:04.389945   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:04 GMT
	I0203 12:05:04.390481   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:04.403274   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:05:04.403274   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:04.403519   11844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 12:05:04.403581   11844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 12:05:04.403645   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:05:04.885192   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:04.885192   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:04.885192   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:04.885192   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:04.889020   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:04.889020   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:04.889020   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:04.889020   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:04 GMT
	I0203 12:05:04.889196   11844 round_trippers.go:580]     Audit-Id: bdbf3fc3-ab1c-4165-97d0-fe483b6d2da0
	I0203 12:05:04.889196   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:04.889196   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:04.889196   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:04.889265   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:05.385173   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:05.385173   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:05.385173   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:05.385173   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:05.389462   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:05.389550   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:05.389550   11844 round_trippers.go:580]     Audit-Id: b8046305-b86d-4661-97e0-3e5a18f80e96
	I0203 12:05:05.389550   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:05.389550   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:05.389550   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:05.389550   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:05.389550   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:05 GMT
	I0203 12:05:05.389778   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:05.390302   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:05.884961   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:05.885016   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:05.885016   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:05.885016   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:05.889649   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:05.889649   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:05.889649   11844 round_trippers.go:580]     Audit-Id: e35d7f33-598b-496f-a367-9e56a19ca2d2
	I0203 12:05:05.889649   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:05.889649   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:05.889649   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:05.889649   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:05.889649   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:05 GMT
	I0203 12:05:05.889649   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:06.385027   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:06.385027   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:06.385027   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:06.385027   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:06.388876   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:06.388876   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:06.388946   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:06.388946   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:06.388946   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:06.388946   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:06 GMT
	I0203 12:05:06.388946   11844 round_trippers.go:580]     Audit-Id: cc9a26e5-8aa6-4c63-a2c8-ca4abe039c08
	I0203 12:05:06.388946   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:06.389526   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:06.528803   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:05:06.528803   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:06.528803   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:05:06.757474   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:05:06.757592   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:06.757592   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:05:06.885062   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:06.885062   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:06.885062   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:06.885062   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:06.888055   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:06.888055   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:06.888055   11844 round_trippers.go:580]     Audit-Id: 5a3dab73-a286-488c-8b48-a33f59358bec
	I0203 12:05:06.888055   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:06.888055   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:06.888055   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:06.888055   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:06.888055   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:06 GMT
	I0203 12:05:06.888055   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:06.914041   11844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 12:05:07.384864   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:07.384864   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:07.384864   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:07.384864   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:07.391032   11844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:05:07.391070   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:07.391070   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:07.391070   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:07 GMT
	I0203 12:05:07.391070   11844 round_trippers.go:580]     Audit-Id: 959a0a86-10ed-4814-9ea3-9a5d788e8232
	I0203 12:05:07.391070   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:07.391070   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:07.391070   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:07.391070   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:07.391797   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:07.448893   11844 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0203 12:05:07.449042   11844 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0203 12:05:07.449042   11844 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0203 12:05:07.449138   11844 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0203 12:05:07.449185   11844 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0203 12:05:07.449185   11844 command_runner.go:130] > pod/storage-provisioner created
	I0203 12:05:07.884804   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:07.884804   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:07.884804   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:07.884804   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:07.889108   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:07.889222   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:07.889222   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:07.889222   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:07 GMT
	I0203 12:05:07.889222   11844 round_trippers.go:580]     Audit-Id: bac29cf5-cb49-4758-b69d-b8a698fa85ab
	I0203 12:05:07.889222   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:07.889222   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:07.889222   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:07.889327   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:08.384797   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:08.384797   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:08.384797   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:08.384797   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:08.389901   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:05:08.389901   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:08.389901   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:08.390002   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:08 GMT
	I0203 12:05:08.390002   11844 round_trippers.go:580]     Audit-Id: 423dca71-f2c0-4267-9ca9-2ec6272d143b
	I0203 12:05:08.390002   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:08.390002   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:08.390002   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:08.391219   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:08.815205   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:05:08.815205   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:08.816222   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:05:08.884541   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:08.884541   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:08.884541   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:08.884541   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:08.888604   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:08.888604   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:08.888604   11844 round_trippers.go:580]     Audit-Id: 9fda3b12-447c-44e4-b789-347a6e9ee2f0
	I0203 12:05:08.888604   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:08.888604   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:08.888604   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:08.888604   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:08.888604   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:08 GMT
	I0203 12:05:08.889129   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:08.951159   11844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 12:05:09.120215   11844 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0203 12:05:09.120215   11844 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 12:05:09.120215   11844 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 12:05:09.120215   11844 round_trippers.go:463] GET https://172.25.1.53:8443/apis/storage.k8s.io/v1/storageclasses
	I0203 12:05:09.120741   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:09.120783   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:09.120783   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:09.123709   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:09.123709   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:09.123709   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:09.123709   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:09.123709   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:09.123709   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:09.123709   11844 round_trippers.go:580]     Content-Length: 1273
	I0203 12:05:09.123709   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:09 GMT
	I0203 12:05:09.123709   11844 round_trippers.go:580]     Audit-Id: 6290b1b0-8225-420b-8941-42b0a8f2dffb
	I0203 12:05:09.123709   11844 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"standard","uid":"c26e1616-39d0-4003-a24c-4cee02850f4d","resourceVersion":"420","creationTimestamp":"2025-02-03T12:05:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-02-03T12:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0203 12:05:09.124516   11844 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c26e1616-39d0-4003-a24c-4cee02850f4d","resourceVersion":"420","creationTimestamp":"2025-02-03T12:05:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-02-03T12:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0203 12:05:09.124597   11844 round_trippers.go:463] PUT https://172.25.1.53:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0203 12:05:09.124638   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:09.124700   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:09.124700   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:09.124700   11844 round_trippers.go:473]     Content-Type: application/json
	I0203 12:05:09.128677   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:09.128677   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:09.128677   11844 round_trippers.go:580]     Audit-Id: 6ac18e60-e9f4-4b96-87a6-0d349a062b32
	I0203 12:05:09.128677   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:09.128677   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:09.128677   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:09.128677   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:09.128677   11844 round_trippers.go:580]     Content-Length: 1220
	I0203 12:05:09.128677   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:09 GMT
	I0203 12:05:09.128677   11844 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c26e1616-39d0-4003-a24c-4cee02850f4d","resourceVersion":"420","creationTimestamp":"2025-02-03T12:05:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-02-03T12:05:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0203 12:05:09.132006   11844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 12:05:09.135793   11844 addons.go:514] duration metric: took 9.0656915s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0203 12:05:09.384722   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:09.385114   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:09.385114   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:09.385114   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:09.388424   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:09.388489   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:09.388489   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:09 GMT
	I0203 12:05:09.388489   11844 round_trippers.go:580]     Audit-Id: b3324a26-47b7-4d88-b4ff-4d2b478cb949
	I0203 12:05:09.388489   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:09.388489   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:09.388489   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:09.388489   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:09.389439   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:09.884902   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:09.884902   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:09.884902   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:09.884902   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:09.888636   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:09.888636   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:09.888636   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:09 GMT
	I0203 12:05:09.888636   11844 round_trippers.go:580]     Audit-Id: 158fb99d-4d13-40ba-be70-b7f00174c5cd
	I0203 12:05:09.888636   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:09.888636   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:09.888636   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:09.888636   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:09.889162   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:09.889663   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:10.384863   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:10.384863   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:10.384959   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:10.384959   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:10.388793   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:10.388951   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:10.388951   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:10 GMT
	I0203 12:05:10.388951   11844 round_trippers.go:580]     Audit-Id: d2924980-f1b1-4f0e-aa13-69d96f1736f7
	I0203 12:05:10.388951   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:10.388951   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:10.389068   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:10.389068   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:10.389353   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:10.884222   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:10.884222   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:10.884222   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:10.884222   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:10.887808   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:10.887808   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:10.887808   11844 round_trippers.go:580]     Audit-Id: 29ceafce-04c0-496b-99ae-aa88a31cb2bf
	I0203 12:05:10.887808   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:10.887808   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:10.888628   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:10.888628   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:10.888628   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:10 GMT
	I0203 12:05:10.889645   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:11.385388   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:11.385859   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:11.385859   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:11.385859   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:11.389203   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:11.389203   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:11.390143   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:11.390143   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:11.390143   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:11.390143   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:11 GMT
	I0203 12:05:11.390143   11844 round_trippers.go:580]     Audit-Id: 1868adf3-6cee-4b04-a414-1cb2d8201bff
	I0203 12:05:11.390143   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:11.390791   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:11.884680   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:11.884772   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:11.884772   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:11.884772   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:11.888044   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:11.888044   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:11.888044   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:11.888044   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:11.888044   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:11.888684   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:11 GMT
	I0203 12:05:11.888684   11844 round_trippers.go:580]     Audit-Id: 80cd9b44-f2ef-4c5c-acfa-2c53a29b817a
	I0203 12:05:11.888684   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:11.888977   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:12.384527   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:12.384988   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:12.384988   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:12.384988   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:12.388039   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:12.388039   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:12.388039   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:12.389026   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:12 GMT
	I0203 12:05:12.389026   11844 round_trippers.go:580]     Audit-Id: 6af04c9a-66ee-4e20-8ba8-b250bcb98761
	I0203 12:05:12.389026   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:12.389026   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:12.389026   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:12.389281   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:12.389704   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:12.884348   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:12.884348   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:12.884348   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:12.884348   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:12.888312   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:12.888312   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:12.888312   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:12.888698   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:12.888698   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:12.888698   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:12 GMT
	I0203 12:05:12.888698   11844 round_trippers.go:580]     Audit-Id: 082219ab-711e-48bc-9802-04a6a23c19ef
	I0203 12:05:12.888698   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:12.889117   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:13.384453   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:13.384453   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:13.384453   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:13.384453   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:13.388423   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:13.388505   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:13.388505   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:13.388505   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:13.388505   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:13.388505   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:13.388571   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:13 GMT
	I0203 12:05:13.388571   11844 round_trippers.go:580]     Audit-Id: b155f422-c983-44f8-95b0-0c53d701d254
	I0203 12:05:13.389260   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:13.885017   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:13.885017   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:13.885094   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:13.885094   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:13.888463   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:13.889009   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:13.889009   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:13.889009   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:13.889009   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:13 GMT
	I0203 12:05:13.889009   11844 round_trippers.go:580]     Audit-Id: 7debd8d6-96f3-4ecc-b1bb-5db648892091
	I0203 12:05:13.889009   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:13.889009   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:13.889279   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:14.384643   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:14.385108   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:14.385108   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:14.385108   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:14.389305   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:14.389397   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:14.389397   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:14.389397   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:14.389397   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:14.389397   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:14 GMT
	I0203 12:05:14.389397   11844 round_trippers.go:580]     Audit-Id: 26f2b19d-2892-4cf9-b379-94da0099089b
	I0203 12:05:14.389397   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:14.389647   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:14.390123   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:14.884412   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:14.884412   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:14.884412   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:14.884412   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:14.888332   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:14.888663   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:14.888663   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:14.888663   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:14.888663   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:14 GMT
	I0203 12:05:14.888663   11844 round_trippers.go:580]     Audit-Id: a7328281-6414-46a6-bdf8-19121bf57434
	I0203 12:05:14.888663   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:14.888663   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:14.889278   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:15.384390   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:15.384390   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:15.384390   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:15.384390   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:15.390106   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:05:15.390198   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:15.390198   11844 round_trippers.go:580]     Audit-Id: eb2bfd22-f272-4249-b8b2-30effe19db43
	I0203 12:05:15.390198   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:15.390198   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:15.390198   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:15.390198   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:15.390198   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:15 GMT
	I0203 12:05:15.390198   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:15.884760   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:15.884760   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:15.884760   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:15.884760   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:15.889311   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:15.889311   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:15.889311   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:15 GMT
	I0203 12:05:15.889311   11844 round_trippers.go:580]     Audit-Id: 9cd3bafd-77d0-4950-9b4a-e0364a824485
	I0203 12:05:15.889311   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:15.889311   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:15.889311   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:15.889311   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:15.889937   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:16.384524   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:16.384524   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:16.384524   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:16.384524   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:16.388848   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:16.388936   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:16.388936   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:16.388936   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:16.388936   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:16 GMT
	I0203 12:05:16.388936   11844 round_trippers.go:580]     Audit-Id: fa12d41b-46b1-4ed0-9941-ec4cc9fecdb2
	I0203 12:05:16.388936   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:16.389015   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:16.389400   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:16.884770   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:16.884770   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:16.884770   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:16.884770   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:16.889455   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:16.889532   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:16.889532   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:16.889532   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:16.889532   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:16.889532   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:16.889532   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:16 GMT
	I0203 12:05:16.889532   11844 round_trippers.go:580]     Audit-Id: 0baabd1d-1b89-4bdf-a884-4a85bb4e8617
	I0203 12:05:16.889735   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:16.890155   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:17.384486   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:17.384486   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:17.384486   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:17.384486   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:17.388691   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:17.388805   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:17.388805   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:17.388805   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:17.388805   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:17.388805   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:17.388805   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:17 GMT
	I0203 12:05:17.388805   11844 round_trippers.go:580]     Audit-Id: 7b5c93b3-e841-4b2a-adab-685f63491637
	I0203 12:05:17.389469   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:17.884709   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:17.884709   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:17.884709   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:17.884709   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:17.888985   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:17.888985   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:17.889185   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:17.889185   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:17.889185   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:17 GMT
	I0203 12:05:17.889185   11844 round_trippers.go:580]     Audit-Id: ea9abfad-c69d-4f9e-9de1-cc7c8924601a
	I0203 12:05:17.889185   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:17.889185   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:17.889597   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:18.385066   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:18.385536   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:18.385536   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:18.385616   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:18.388869   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:18.389345   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:18.389345   11844 round_trippers.go:580]     Audit-Id: bfb3cac4-ad26-439a-b151-95d4e196759f
	I0203 12:05:18.389345   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:18.389345   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:18.389345   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:18.389420   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:18.389434   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:18 GMT
	I0203 12:05:18.390353   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:18.885342   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:18.885342   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:18.885342   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:18.885342   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:18.889534   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:18.889534   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:18.889534   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:18 GMT
	I0203 12:05:18.890136   11844 round_trippers.go:580]     Audit-Id: 75a216d5-e195-41ea-a9e1-063ab6af2b74
	I0203 12:05:18.890136   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:18.890136   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:18.890136   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:18.890136   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:18.890706   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:18.891244   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:19.384676   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:19.385082   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:19.385082   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:19.385082   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:19.387911   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:19.388813   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:19.388813   11844 round_trippers.go:580]     Audit-Id: 8d41ccc5-9473-4385-bf50-e251268e87f8
	I0203 12:05:19.388813   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:19.388813   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:19.388813   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:19.388813   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:19.388813   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:19 GMT
	I0203 12:05:19.388893   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:19.885004   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:19.885004   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:19.885086   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:19.885086   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:19.889143   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:19.889143   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:19.889237   11844 round_trippers.go:580]     Audit-Id: 23d86fd3-f47b-48f0-b09c-8264801d02f3
	I0203 12:05:19.889237   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:19.889237   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:19.889237   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:19.889237   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:19.889237   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:19 GMT
	I0203 12:05:19.889697   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:20.384599   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:20.384599   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:20.384599   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:20.384599   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:20.388882   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:20.389354   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:20.389354   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:20.389354   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:20.389354   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:20 GMT
	I0203 12:05:20.389354   11844 round_trippers.go:580]     Audit-Id: 6cda033d-bca1-41c8-bcc0-be0490972b28
	I0203 12:05:20.389354   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:20.389354   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:20.389691   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:20.885070   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:20.885147   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:20.885147   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:20.885147   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:20.888503   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:20.889287   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:20.889287   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:20.889287   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:20.889287   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:20.889287   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:20 GMT
	I0203 12:05:20.889287   11844 round_trippers.go:580]     Audit-Id: 86decb5d-51a8-47df-8499-3f3223ae472e
	I0203 12:05:20.889287   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:20.889492   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:21.385528   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:21.385597   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:21.385597   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:21.385597   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:21.389573   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:21.389677   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:21.389677   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:21.389677   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:21.389677   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:21 GMT
	I0203 12:05:21.389677   11844 round_trippers.go:580]     Audit-Id: b13bcbb8-15e3-4bcd-98be-984bc41a21b1
	I0203 12:05:21.389677   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:21.389743   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:21.390258   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"335","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4925 chars]
	I0203 12:05:21.390748   11844 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:05:21.884416   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:21.884416   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:21.884416   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:21.884416   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:21.888828   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:21.888828   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:21.888828   11844 round_trippers.go:580]     Audit-Id: d1ada172-c997-4bc0-ae16-c28a65f81e79
	I0203 12:05:21.888909   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:21.888909   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:21.888909   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:21.888909   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:21.888909   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:21 GMT
	I0203 12:05:21.888974   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:21.889708   11844 node_ready.go:49] node "multinode-749300" has status "Ready":"True"
	I0203 12:05:21.889766   11844 node_ready.go:38] duration metric: took 21.0053933s for node "multinode-749300" to be "Ready" ...
	I0203 12:05:21.889766   11844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:05:21.889923   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:05:21.889923   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:21.889991   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:21.889991   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:21.909117   11844 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0203 12:05:21.909117   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:21.909117   11844 round_trippers.go:580]     Audit-Id: 17d301db-55c7-408a-96e1-5cf441d3d3ea
	I0203 12:05:21.909117   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:21.909202   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:21.909202   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:21.909202   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:21.909202   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:21 GMT
	I0203 12:05:21.910149   11844 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"431","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58081 chars]
	I0203 12:05:21.914754   11844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:21.914928   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:21.914928   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:21.914928   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:21.914994   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:21.917720   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:21.918082   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:21.918082   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:21 GMT
	I0203 12:05:21.918082   11844 round_trippers.go:580]     Audit-Id: a465540b-2b3f-4bda-961f-e5a9c44edb0f
	I0203 12:05:21.918082   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:21.918082   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:21.918082   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:21.918082   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:21.918423   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"431","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6699 chars]
	I0203 12:05:21.918578   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:21.918578   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:21.918578   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:21.918578   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:21.925707   11844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:05:21.926241   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:21.926241   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:21.926241   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:21.926241   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:21.926241   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:21.926241   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:21 GMT
	I0203 12:05:21.926241   11844 round_trippers.go:580]     Audit-Id: 5ca96b3a-2cd3-49fb-9531-8bb8102dcf5a
	I0203 12:05:21.926582   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:22.415211   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:22.415211   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:22.415211   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:22.415211   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:22.419343   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:22.419442   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:22.419442   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:22 GMT
	I0203 12:05:22.419442   11844 round_trippers.go:580]     Audit-Id: 896b4f99-a6f4-4ff1-9c79-3d4922caf123
	I0203 12:05:22.419442   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:22.419442   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:22.419442   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:22.419442   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:22.420277   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"431","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6699 chars]
	I0203 12:05:22.421099   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:22.421099   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:22.421099   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:22.421099   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:22.423918   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:22.424152   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:22.424152   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:22.424152   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:22.424152   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:22.424152   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:22 GMT
	I0203 12:05:22.424152   11844 round_trippers.go:580]     Audit-Id: 9ec95c9d-38a9-484a-8814-62f3f588640c
	I0203 12:05:22.424152   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:22.424473   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:22.918118   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:22.918194   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:22.918194   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:22.918194   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:22.925465   11844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:05:22.925465   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:22.925465   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:22.925465   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:22 GMT
	I0203 12:05:22.925465   11844 round_trippers.go:580]     Audit-Id: 4636ac4b-4d14-40b9-96c9-87103c3527a4
	I0203 12:05:22.925465   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:22.925465   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:22.925465   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:22.925465   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"431","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6699 chars]
	I0203 12:05:22.926462   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:22.926462   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:22.926462   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:22.926462   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:22.930860   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:22.930860   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:22.930860   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:22.930860   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:22 GMT
	I0203 12:05:22.930860   11844 round_trippers.go:580]     Audit-Id: 3fb3b90b-2c47-4f54-b377-809785a7cd57
	I0203 12:05:22.930860   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:22.930860   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:22.930860   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:22.930860   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:23.415832   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:23.415832   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:23.415832   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:23.415832   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:23.418889   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:23.419857   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:23.419857   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:23.419857   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:23 GMT
	I0203 12:05:23.419857   11844 round_trippers.go:580]     Audit-Id: e78b5a9d-14e2-4c96-9554-3834dffd8408
	I0203 12:05:23.419857   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:23.419857   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:23.419857   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:23.419956   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"444","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7059 chars]
	I0203 12:05:23.421250   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:23.421312   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:23.421312   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:23.421312   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:23.423549   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:23.423549   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:23.423549   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:23.423549   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:23.423549   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:23.423549   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:23 GMT
	I0203 12:05:23.423549   11844 round_trippers.go:580]     Audit-Id: 73c565d2-6936-4656-856e-24809deb2452
	I0203 12:05:23.424301   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:23.424579   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:23.915073   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:23.915073   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:23.915073   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:23.915073   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:23.919437   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:23.919437   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:23.919437   11844 round_trippers.go:580]     Audit-Id: 7e56fc42-770d-477d-88c3-bbc45d546ca0
	I0203 12:05:23.919437   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:23.919437   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:23.919521   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:23.919521   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:23.919521   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:23 GMT
	I0203 12:05:23.919853   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"444","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7059 chars]
	I0203 12:05:23.920733   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:23.920797   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:23.920797   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:23.920797   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:23.923449   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:23.923994   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:23.923994   11844 round_trippers.go:580]     Audit-Id: f0669093-3bdd-40fb-b0d4-767c320e2459
	I0203 12:05:23.923994   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:23.923994   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:23.923994   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:23.923994   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:23.923994   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:23 GMT
	I0203 12:05:23.924299   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:23.924723   11844 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:05:24.415053   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:05:24.415053   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.415053   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.415053   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.419747   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:24.419818   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.419818   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.419818   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.419818   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.419818   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.419818   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.419818   11844 round_trippers.go:580]     Audit-Id: 8848631b-2bab-4d63-b816-5f4e7ec64b7d
	I0203 12:05:24.419818   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"447","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6830 chars]
	I0203 12:05:24.420840   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.420840   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.420840   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.420902   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.427297   11844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:05:24.427297   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.427297   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.427297   11844 round_trippers.go:580]     Audit-Id: 360a5bb3-4252-4686-a8c1-86ac9bc880bd
	I0203 12:05:24.427297   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.427297   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.427297   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.427297   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.427297   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.427297   11844 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.427297   11844 pod_ready.go:82] duration metric: took 2.5124331s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.427297   11844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.427297   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:05:24.427297   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.427297   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.427297   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.431333   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:24.431420   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.431420   11844 round_trippers.go:580]     Audit-Id: ac90cd19-fcac-478d-929b-e083756a03d0
	I0203 12:05:24.431420   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.431420   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.431420   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.431482   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.431482   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.432137   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"c751851c-68ee-4c15-80ca-32642fcf2a5a","resourceVersion":"372","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.1.53:2379","kubernetes.io/config.hash":"cea8016677ee73c66077ce584fb15354","kubernetes.io/config.mirror":"cea8016677ee73c66077ce584fb15354","kubernetes.io/config.seen":"2025-02-03T12:04:55.455014244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cli
ent-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.m [truncated 6443 chars]
	I0203 12:05:24.432751   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.432751   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.432751   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.432751   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.434780   11844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:05:24.434780   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.434780   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.435491   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.435491   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.435491   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.435491   11844 round_trippers.go:580]     Audit-Id: 56b6f939-e197-4c88-9ff4-aa7b9e58705a
	I0203 12:05:24.435491   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.435926   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.436288   11844 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.436352   11844 pod_ready.go:82] duration metric: took 9.0547ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.436352   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.436481   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:05:24.436481   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.436481   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.436538   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.438763   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:24.438763   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.438763   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.438763   11844 round_trippers.go:580]     Audit-Id: 4118df10-3b50-4b64-b82b-879fc5e6e41b
	I0203 12:05:24.438763   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.438763   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.438763   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.438763   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.438763   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"b18ba461-b225-4090-8341-159171502b52","resourceVersion":"402","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.1.53:8443","kubernetes.io/config.hash":"a8703dd831250f30e213efd5fca131d7","kubernetes.io/config.mirror":"a8703dd831250f30e213efd5fca131d7","kubernetes.io/config.seen":"2025-02-03T12:04:55.455019045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kuber
netes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.io [truncated 7674 chars]
	I0203 12:05:24.438763   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.438763   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.438763   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.438763   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.442442   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:05:24.442442   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.442524   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.442524   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.442524   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.442524   11844 round_trippers.go:580]     Audit-Id: 2fe3094a-730b-4e73-bfaf-17196b0c7de5
	I0203 12:05:24.442524   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.442524   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.442606   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.442606   11844 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.442606   11844 pod_ready.go:82] duration metric: took 6.2539ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.442606   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.442606   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:05:24.442606   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.442606   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.442606   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.445210   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:24.445210   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.445210   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.445210   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.445210   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.445210   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.445210   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.445210   11844 round_trippers.go:580]     Audit-Id: 27dcc04c-cd79-4691-b8d5-a2b800f37318
	I0203 12:05:24.446029   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"405","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7254 chars]
	I0203 12:05:24.446583   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.446675   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.446675   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.446675   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.448647   11844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:05:24.448647   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.448647   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.448647   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.448647   11844 round_trippers.go:580]     Audit-Id: 8aea3071-09a3-4a80-b2a9-e2f1ed19e9a6
	I0203 12:05:24.448647   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.448647   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.448647   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.449351   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.449966   11844 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.450039   11844 pod_ready.go:82] duration metric: took 7.3604ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.450039   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.450109   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:05:24.450109   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.450109   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.450178   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.452657   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:24.452657   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.452657   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.452657   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.452657   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.452657   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.452657   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.452657   11844 round_trippers.go:580]     Audit-Id: 0b3adb72-6e7f-412d-bc0b-89f6d89efc06
	I0203 12:05:24.453081   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"400","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6184 chars]
	I0203 12:05:24.454203   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.454256   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.454256   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.454256   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.457238   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:05:24.457238   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.457238   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.457238   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.457238   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.457238   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.457238   11844 round_trippers.go:580]     Audit-Id: a957c38b-f70b-438f-bfc3-39e15cd6cdee
	I0203 12:05:24.457238   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.457238   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.457238   11844 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.457238   11844 pod_ready.go:82] duration metric: took 7.1998ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.457238   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.616220   11844 request.go:632] Waited for 158.9801ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:05:24.616220   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:05:24.616220   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.616220   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.616220   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.620762   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:24.621013   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.621013   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.621013   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.621013   11844 round_trippers.go:580]     Audit-Id: c2640bbe-5f68-4b57-bfd2-6951bc105341
	I0203 12:05:24.621013   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.621013   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.621013   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.621999   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"328","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5347 chars]
	I0203 12:05:24.816389   11844 request.go:632] Waited for 193.694ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.816389   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:05:24.816389   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.816389   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.816389   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.820830   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:24.820830   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.820830   11844 round_trippers.go:580]     Audit-Id: 0a23a6d2-580d-40d7-ab27-447f0bc4b565
	I0203 12:05:24.820830   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.820830   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.820830   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.820830   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.820830   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.821158   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4780 chars]
	I0203 12:05:24.821662   11844 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:05:24.821726   11844 pod_ready.go:82] duration metric: took 364.4196ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:05:24.821726   11844 pod_ready.go:39] duration metric: took 2.9319272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
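The loop logged above polls each system pod until its Ready condition reports True (the pod_ready.go lines). A minimal client-go sketch of that pattern follows; it is illustrative only, not minikube's pod_ready.go, and the helper name waitPodReady is hypothetical.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, mirroring the
    // "waiting up to 6m0s for pod ... to be Ready" loop in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-668d6bf9bc-v2gkp", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }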
	I0203 12:05:24.821726   11844 api_server.go:52] waiting for apiserver process to appear ...
	I0203 12:05:24.830468   11844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:05:24.855878   11844 command_runner.go:130] > 2102
	I0203 12:05:24.855961   11844 api_server.go:72] duration metric: took 24.7854771s to wait for apiserver process to appear ...
	I0203 12:05:24.855961   11844 api_server.go:88] waiting for apiserver healthz status ...
	I0203 12:05:24.856041   11844 api_server.go:253] Checking apiserver healthz at https://172.25.1.53:8443/healthz ...
	I0203 12:05:24.863353   11844 api_server.go:279] https://172.25.1.53:8443/healthz returned 200:
	ok
	I0203 12:05:24.863353   11844 round_trippers.go:463] GET https://172.25.1.53:8443/version
	I0203 12:05:24.863353   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:24.863353   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:24.864227   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:24.865263   11844 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:05:24.865872   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:24.865872   11844 round_trippers.go:580]     Audit-Id: bca23bfd-897b-44a8-9972-0f0a48c56383
	I0203 12:05:24.865872   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:24.865872   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:24.865872   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:24.865872   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:24.865872   11844 round_trippers.go:580]     Content-Length: 263
	I0203 12:05:24.865872   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:24 GMT
	I0203 12:05:24.865951   11844 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0203 12:05:24.866036   11844 api_server.go:141] control plane version: v1.32.1
	I0203 12:05:24.866112   11844 api_server.go:131] duration metric: took 10.1507ms to wait for apiserver health ...
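Once the pods are Ready, the log confirms the API server itself is healthy (GET /healthz returning "ok") and records the control-plane version (GET /version). A hedged sketch of the same two probes with client-go, not the api_server.go implementation:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz: the raw endpoint returns the literal body "ok" on success,
        // matching the "returned 200: ok" lines above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version: the discovery client decodes the same JSON shown in the log
        // (major/minor/gitVersion/...).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }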
	I0203 12:05:24.866112   11844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 12:05:25.015752   11844 request.go:632] Waited for 149.5663ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:05:25.015752   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:05:25.015752   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:25.015752   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:25.015752   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:25.021538   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:05:25.021761   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:25.021761   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:25.021761   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:25.021761   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:25.021761   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:25.021761   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:25 GMT
	I0203 12:05:25.021761   11844 round_trippers.go:580]     Audit-Id: bd672531-905f-4ce1-9a8f-89d4b6d72fd2
	I0203 12:05:25.023124   11844 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"447","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58195 chars]
	I0203 12:05:25.025656   11844 system_pods.go:59] 8 kube-system pods found
	I0203 12:05:25.025727   11844 system_pods.go:61] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "etcd-multinode-749300" [c751851c-68ee-4c15-80ca-32642fcf2a5a] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "kube-apiserver-multinode-749300" [b18ba461-b225-4090-8341-159171502b52] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:05:25.025727   11844 system_pods.go:61] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:05:25.025727   11844 system_pods.go:74] duration metric: took 159.6134ms to wait for pod list to return data ...
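The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter (stock defaults are low, around 5 QPS with a small burst), not from the server. A sketch that lists the kube-system pods as the step above does, with the client-side limits raised; the QPS/Burst values are illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Raise the client-side rate limits so bursts of GETs are not delayed
        // with "Waited ... due to client-side throttling" (values illustrative).
        cfg.QPS = 50
        cfg.Burst = 100

        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }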
	I0203 12:05:25.025727   11844 default_sa.go:34] waiting for default service account to be created ...
	I0203 12:05:25.215708   11844 request.go:632] Waited for 189.8143ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/default/serviceaccounts
	I0203 12:05:25.215708   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/default/serviceaccounts
	I0203 12:05:25.215708   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:25.215708   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:25.215708   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:25.221386   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:05:25.221386   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:25.221508   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:25.221508   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:25.221508   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:25.221508   11844 round_trippers.go:580]     Content-Length: 261
	I0203 12:05:25.221508   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:25 GMT
	I0203 12:05:25.221508   11844 round_trippers.go:580]     Audit-Id: db926e64-7ac3-4157-a904-552c7d934af1
	I0203 12:05:25.221508   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:25.221585   11844 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6fd4ae1e-3802-4893-86a4-85da162d717d","resourceVersion":"329","creationTimestamp":"2025-02-03T12:04:59Z"}}]}
	I0203 12:05:25.221807   11844 default_sa.go:45] found service account: "default"
	I0203 12:05:25.221890   11844 default_sa.go:55] duration metric: took 196.1612ms for default service account to be created ...
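The default service account is created asynchronously by the controller manager shortly after the cluster comes up, which is why the log waits for it rather than assuming it exists. A minimal polling sketch of that wait (illustrative only):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the "default" ServiceAccount exists in the "default" namespace.
        err = wait.PollUntilContextTimeout(context.Background(), 250*time.Millisecond, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not created yet, keep polling
                }
                return err == nil, err
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`found service account: "default"`)
    }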
	I0203 12:05:25.221890   11844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 12:05:25.415392   11844 request.go:632] Waited for 193.3737ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:05:25.415392   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:05:25.415392   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:25.415392   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:25.415392   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:25.419739   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:25.419739   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:25.420290   11844 round_trippers.go:580]     Audit-Id: 8c7c054e-6b21-489f-9420-1f94026ffad4
	I0203 12:05:25.420290   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:25.420290   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:25.420290   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:25.420290   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:25.420290   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:25 GMT
	I0203 12:05:25.421433   11844 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"447","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58195 chars]
	I0203 12:05:25.424077   11844 system_pods.go:86] 8 kube-system pods found
	I0203 12:05:25.424147   11844 system_pods.go:89] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:05:25.424147   11844 system_pods.go:89] "etcd-multinode-749300" [c751851c-68ee-4c15-80ca-32642fcf2a5a] Running
	I0203 12:05:25.424147   11844 system_pods.go:89] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:05:25.424147   11844 system_pods.go:89] "kube-apiserver-multinode-749300" [b18ba461-b225-4090-8341-159171502b52] Running
	I0203 12:05:25.424147   11844 system_pods.go:89] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:05:25.424147   11844 system_pods.go:89] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:05:25.424218   11844 system_pods.go:89] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:05:25.424218   11844 system_pods.go:89] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:05:25.424218   11844 system_pods.go:126] duration metric: took 202.3255ms to wait for k8s-apps to be running ...
	I0203 12:05:25.424218   11844 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:05:25.431330   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:05:25.459477   11844 system_svc.go:56] duration metric: took 35.2584ms WaitForService to wait for kubelet
	I0203 12:05:25.459477   11844 kubeadm.go:582] duration metric: took 25.3889863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
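Beyond the API checks, the log also verifies the host side: pgrep for the kube-apiserver process and systemctl for the kubelet unit, both run through minikube's SSH runner inside the VM. A rough local-exec equivalent of those two probes (illustrative; the real checks run over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as "sudo pgrep -xnf kube-apiserver.*minikube.*" in the log:
        // prints the newest matching PID, or exits non-zero if none is running.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver process not found:", err)
        } else {
            fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
        }

        // Same probe as "sudo systemctl is-active --quiet service kubelet":
        // a zero exit status means the kubelet unit is active.
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
        } else {
            fmt.Println("kubelet service is active")
        }
    }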
	I0203 12:05:25.459477   11844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:05:25.616078   11844 request.go:632] Waited for 155.5872ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes
	I0203 12:05:25.616078   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes
	I0203 12:05:25.616078   11844 round_trippers.go:469] Request Headers:
	I0203 12:05:25.616078   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:05:25.616078   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:05:25.620301   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:05:25.621055   11844 round_trippers.go:577] Response Headers:
	I0203 12:05:25.621055   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:05:25.621055   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:05:25.621055   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:05:25.621055   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:05:25 GMT
	I0203 12:05:25.621055   11844 round_trippers.go:580]     Audit-Id: 6ea22732-26f6-4ab4-b2d8-e5ee256a1f39
	I0203 12:05:25.621055   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:05:25.621438   11844 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"426","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4833 chars]
	I0203 12:05:25.621946   11844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:05:25.622094   11844 node_conditions.go:123] node cpu capacity is 2
	I0203 12:05:25.622094   11844 node_conditions.go:105] duration metric: took 162.6156ms to run NodePressure ...
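The NodePressure step reads each node's capacity and conditions; the ephemeral-storage and cpu figures logged above come from node.Status.Capacity. A sketch of the same read (illustrative, not node_conditions.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())

            // MemoryPressure / DiskPressure should be False on a healthy node.
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }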
	I0203 12:05:25.622094   11844 start.go:241] waiting for startup goroutines ...
	I0203 12:05:25.622208   11844 start.go:246] waiting for cluster config update ...
	I0203 12:05:25.622208   11844 start.go:255] writing updated cluster config ...
	I0203 12:05:25.628961   11844 out.go:201] 
	I0203 12:05:25.632189   11844 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:05:25.639950   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:05:25.640604   11844 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:05:25.645303   11844 out.go:177] * Starting "multinode-749300-m02" worker node in "multinode-749300" cluster
	I0203 12:05:25.647591   11844 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:05:25.647591   11844 cache.go:56] Caching tarball of preloaded images
	I0203 12:05:25.648682   11844 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:05:25.648815   11844 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:05:25.648847   11844 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:05:25.654638   11844 start.go:360] acquireMachinesLock for multinode-749300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:05:25.655638   11844 start.go:364] duration metric: took 1.0002ms to acquireMachinesLock for "multinode-749300-m02"
	I0203 12:05:25.655911   11844 start.go:93] Provisioning new machine with config: &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C
:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:05:25.656000   11844 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0203 12:05:25.661743   11844 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0203 12:05:25.661743   11844 start.go:159] libmachine.API.Create for "multinode-749300" (driver="hyperv")
	I0203 12:05:25.661743   11844 client.go:168] LocalClient.Create starting
	I0203 12:05:25.662340   11844 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0203 12:05:25.662340   11844 main.go:141] libmachine: Decoding PEM data...
	I0203 12:05:25.662340   11844 main.go:141] libmachine: Parsing certificate...
	I0203 12:05:25.662340   11844 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0203 12:05:25.662859   11844 main.go:141] libmachine: Decoding PEM data...
	I0203 12:05:25.662941   11844 main.go:141] libmachine: Parsing certificate...
	I0203 12:05:25.663011   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0203 12:05:27.440433   11844 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0203 12:05:27.440485   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:27.440485   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0203 12:05:29.056962   11844 main.go:141] libmachine: [stdout =====>] : False
	
	I0203 12:05:29.057037   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:29.057037   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 12:05:30.458394   11844 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 12:05:30.458680   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:30.458772   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 12:05:33.807094   11844 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 12:05:33.807431   11844 main.go:141] libmachine: [stderr =====>] : 
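
The [executing ==>] entry above asks PowerShell for Hyper-V switches as JSON, keeping external switches plus the well-known "Default Switch" GUID, and the driver then picks one (here: "Default Switch"). The Go sketch below shows the same query-and-parse step in minimal form; it is an illustration, not minikube's actual hyperv driver code, and vmSwitch/listCandidateSwitches are invented names.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the fields selected by the PowerShell query in the log.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    // listCandidateSwitches runs the same kind of Get-VMSwitch query seen above:
    // external switches plus the well-known "Default Switch" GUID, as JSON.
    func listCandidateSwitches() ([]vmSwitch, error) {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
            `Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
            `Sort-Object -Property SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return nil, fmt.Errorf("powershell query failed: %w", err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            return nil, fmt.Errorf("parsing switch list: %w", err)
        }
        return switches, nil
    }

    func main() {
        switches, err := listCandidateSwitches()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        for _, s := range switches {
            fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }
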
	I0203 12:05:33.809914   11844 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0203 12:05:34.255796   11844 main.go:141] libmachine: Creating SSH key...
	I0203 12:05:34.469875   11844 main.go:141] libmachine: Creating VM...
	I0203 12:05:34.469875   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0203 12:05:37.123298   11844 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0203 12:05:37.123298   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:37.123789   11844 main.go:141] libmachine: Using switch "Default Switch"
	I0203 12:05:37.123947   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0203 12:05:38.745750   11844 main.go:141] libmachine: [stdout =====>] : True
	
	I0203 12:05:38.745928   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:38.745928   11844 main.go:141] libmachine: Creating VHD
	I0203 12:05:38.746045   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0203 12:05:42.346491   11844 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7EA8BAAF-BD01-47A2-9EB6-B25B4422306E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0203 12:05:42.346491   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:42.346491   11844 main.go:141] libmachine: Writing magic tar header
	I0203 12:05:42.346491   11844 main.go:141] libmachine: Writing SSH key tar header
	I0203 12:05:42.359818   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0203 12:05:45.429739   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:05:45.430322   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:45.430425   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\disk.vhd' -SizeBytes 20000MB
	I0203 12:05:47.860082   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:05:47.860530   11844 main.go:141] libmachine: [stderr =====>] : 
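
The disk for the new node is prepared in three PowerShell steps visible above: New-VHD creates a small 10MB fixed VHD, the raw image is seeded with the "magic tar header" and SSH key, Convert-VHD turns it into a dynamic disk.vhd, and Resize-VHD grows it to the requested 20000MB. A hedged Go sketch of driving that sequence follows; runPS and prepareDisk are invented helper names and error handling is kept minimal.

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // runPS executes one Hyper-V cmdlet line through powershell.exe, as the
    // log's "[executing ==>]" entries do.
    func runPS(script string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", script, err, out)
        }
        return nil
    }

    // prepareDisk mirrors the fixed -> dynamic -> resize sequence from the log.
    // The fixed VHD is created small so the raw image can be seeded (tar header
    // plus SSH key) before conversion; sizeMB is the final virtual size.
    func prepareDisk(machineDir string, sizeMB int) error {
        fixed := filepath.Join(machineDir, "fixed.vhd")
        disk := filepath.Join(machineDir, "disk.vhd")
        steps := []string{
            fmt.Sprintf("Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed", fixed),
            // (seeding of the raw fixed.vhd happens between these steps)
            fmt.Sprintf("Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource", fixed, disk),
            fmt.Sprintf("Hyper-V\\Resize-VHD -Path '%s' -SizeBytes %dMB", disk, sizeMB),
        }
        for _, s := range steps {
            if err := runPS(s); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := prepareDisk(`C:\tmp\demo-machine`, 20000); err != nil {
            fmt.Println("error:", err)
        }
    }
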
	I0203 12:05:47.860606   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-749300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0203 12:05:51.226174   11844 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-749300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0203 12:05:51.226174   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:51.226174   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-749300-m02 -DynamicMemoryEnabled $false
	I0203 12:05:53.340631   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:05:53.341400   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:53.341400   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-749300-m02 -Count 2
	I0203 12:05:55.374304   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:05:55.374545   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:55.374602   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-749300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\boot2docker.iso'
	I0203 12:05:57.853617   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:05:57.853862   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:05:57.853862   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-749300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\disk.vhd'
	I0203 12:06:00.372772   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:00.373141   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:00.373141   11844 main.go:141] libmachine: Starting VM...
	I0203 12:06:00.373141   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300-m02
	I0203 12:06:03.266524   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:03.266524   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:03.267448   11844 main.go:141] libmachine: Waiting for host to start...
	I0203 12:06:03.267448   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:05.381098   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:05.381791   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:05.381853   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:07.753493   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:07.753899   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:08.754615   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:10.805820   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:10.805820   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:10.806487   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:13.110541   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:13.110541   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:14.110740   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:16.180247   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:16.180247   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:16.180247   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:18.551784   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:18.551784   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:19.551861   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:21.597444   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:21.597444   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:21.597444   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:23.938422   11844 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:06:23.939110   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:24.939597   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:27.013891   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:27.013988   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:27.014100   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:29.614590   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:29.614590   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:29.614918   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:31.582525   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:31.582525   11844 main.go:141] libmachine: [stderr =====>] : 
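
"Waiting for host to start..." above is a plain poll loop: confirm the VM state is Running, then read the first IP address of the first network adapter, sleeping and retrying while DHCP has not yet handed out an address (the empty stdout lines). A minimal Go sketch of the same loop follows, assuming the invented helpers ps and waitForIP; it is not minikube's actual code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs a one-line PowerShell expression and returns trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM until it reports Running and its first adapter has
    // an IP address, mirroring the Get-VM polling seen in the log.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second) // the log shows roughly one-second pauses between attempts
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
    }

    func main() {
        ip, err := waitForIP("multinode-749300-m02", 5*time.Minute)
        fmt.Println(ip, err)
    }
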
	I0203 12:06:31.582634   11844 machine.go:93] provisionDockerMachine start ...
	I0203 12:06:31.582748   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:33.601995   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:33.601995   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:33.601995   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:36.013593   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:36.013906   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:36.017908   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:06:36.030933   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:06:36.030933   11844 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:06:36.162287   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:06:36.162388   11844 buildroot.go:166] provisioning hostname "multinode-749300-m02"
	I0203 12:06:36.162488   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:38.148589   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:38.149427   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:38.149513   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:40.526257   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:40.526257   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:40.530348   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:06:40.530551   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:06:40.530551   11844 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300-m02 && echo "multinode-749300-m02" | sudo tee /etc/hostname
	I0203 12:06:40.690485   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300-m02
	
	I0203 12:06:40.690625   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:42.671305   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:42.671305   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:42.671305   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:45.037724   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:45.038504   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:45.042801   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:06:45.043258   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:06:45.043258   11844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:06:45.191759   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:06:45.191759   11844 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:06:45.191759   11844 buildroot.go:174] setting up certificates
	I0203 12:06:45.191759   11844 provision.go:84] configureAuth start
	I0203 12:06:45.191759   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:47.214918   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:47.214918   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:47.215015   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:49.617936   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:49.617936   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:49.617936   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:51.616570   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:51.616570   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:51.616866   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:54.015197   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:54.015197   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:54.015755   11844 provision.go:143] copyHostCerts
	I0203 12:06:54.015868   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:06:54.016113   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:06:54.016113   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:06:54.016384   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:06:54.017277   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:06:54.017375   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:06:54.017470   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:06:54.017670   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:06:54.018459   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:06:54.018655   11844 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:06:54.018655   11844 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:06:54.018899   11844 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:06:54.019603   11844 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300-m02 san=[127.0.0.1 172.25.8.35 localhost minikube multinode-749300-m02]
	I0203 12:06:54.137325   11844 provision.go:177] copyRemoteCerts
	I0203 12:06:54.146206   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:06:54.146286   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:06:56.148060   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:06:56.148060   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:56.148161   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:06:58.506578   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:06:58.506655   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:06:58.506879   11844 sshutil.go:53] new ssh client: &{IP:172.25.8.35 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:06:58.614602   11844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4682882s)
	I0203 12:06:58.614676   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:06:58.614676   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:06:58.660943   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:06:58.661706   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0203 12:06:58.705570   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:06:58.705570   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 12:06:58.751570   11844 provision.go:87] duration metric: took 13.5596587s to configureAuth
	I0203 12:06:58.751570   11844 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:06:58.752196   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:06:58.752196   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:00.753427   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:00.753463   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:00.753536   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:03.158014   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:03.158014   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:03.162120   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:07:03.162120   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:07:03.162120   11844 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:07:03.296722   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:07:03.296722   11844 buildroot.go:70] root file system type: tmpfs
	I0203 12:07:03.296967   11844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:07:03.296967   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:05.285580   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:05.285975   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:05.286053   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:07.648003   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:07.648003   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:07.651930   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:07:07.652002   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:07:07.652002   11844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.1.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:07:07.807620   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.1.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:07:07.807620   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:09.784446   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:09.784446   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:09.784605   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:12.142526   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:12.142526   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:12.148557   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:07:12.151930   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:07:12.151930   11844 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:07:14.443427   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:07:14.443427   11844 machine.go:96] duration metric: took 42.8603127s to provisionDockerMachine
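
The docker.service install step above uses a "replace only if changed" idiom: diff -u the installed unit against the freshly rendered one, and only on a difference (or, as here, when the unit does not exist yet) move the new file into place, daemon-reload, enable and restart. A small Go sketch of composing that remote command is below; installUnitCmd is an invented name, not minikube's API.

    package main

    import "fmt"

    // installUnitCmd renders the "replace only if changed" shell command used
    // above to install a systemd unit: assuming <path>.new was written first,
    // swap it in and restart the service only when diff reports a difference
    // (or when the installed file does not exist, as in the log's "can't stat"
    // case).
    func installUnitCmd(path, service string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, service)
    }

    func main() {
        fmt.Println(installUnitCmd("/lib/systemd/system/docker.service", "docker"))
    }
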
	I0203 12:07:14.443427   11844 client.go:171] duration metric: took 1m48.7804644s to LocalClient.Create
	I0203 12:07:14.443427   11844 start.go:167] duration metric: took 1m48.7804644s to libmachine.API.Create "multinode-749300"
	I0203 12:07:14.443427   11844 start.go:293] postStartSetup for "multinode-749300-m02" (driver="hyperv")
	I0203 12:07:14.443427   11844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:07:14.451839   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:07:14.451839   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:16.465314   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:16.465759   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:16.465836   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:18.833096   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:18.833183   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:18.833248   11844 sshutil.go:53] new ssh client: &{IP:172.25.8.35 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:07:18.933015   11844 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4811259s)
	I0203 12:07:18.941297   11844 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:07:18.948551   11844 command_runner.go:130] > NAME=Buildroot
	I0203 12:07:18.948551   11844 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:07:18.948551   11844 command_runner.go:130] > ID=buildroot
	I0203 12:07:18.948551   11844 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:07:18.948551   11844 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:07:18.948686   11844 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:07:18.948686   11844 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:07:18.949050   11844 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:07:18.949642   11844 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:07:18.949704   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:07:18.957590   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:07:18.977220   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:07:19.022769   11844 start.go:296] duration metric: took 4.5791913s for postStartSetup
	I0203 12:07:19.026163   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:21.036220   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:21.036220   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:21.036484   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:23.431467   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:23.431990   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:23.432228   11844 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:07:23.434534   11844 start.go:128] duration metric: took 1m57.7772132s to createHost
	I0203 12:07:23.434696   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:25.426236   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:25.426236   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:25.427271   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:27.794227   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:27.794227   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:27.798041   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:07:27.798250   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:07:27.798250   11844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:07:27.929903   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738584447.944786092
	
	I0203 12:07:27.929903   11844 fix.go:216] guest clock: 1738584447.944786092
	I0203 12:07:27.929903   11844 fix.go:229] Guest: 2025-02-03 12:07:27.944786092 +0000 UTC Remote: 2025-02-03 12:07:23.4346179 +0000 UTC m=+322.459417801 (delta=4.510168192s)
	I0203 12:07:27.929903   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:29.893205   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:29.893257   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:29.893257   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:32.275627   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:32.275627   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:32.280185   11844 main.go:141] libmachine: Using SSH client type: native
	I0203 12:07:32.280400   11844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.8.35 22 <nil> <nil>}
	I0203 12:07:32.280400   11844 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738584447
	I0203 12:07:32.418718   11844 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:07:27 UTC 2025
	
	I0203 12:07:32.418718   11844 fix.go:236] clock set: Mon Feb  3 12:07:27 UTC 2025
	 (err=<nil>)
	I0203 12:07:32.418718   11844 start.go:83] releasing machines lock for "multinode-749300-m02", held for 2m6.7616596s
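
The fix.go lines above compare the guest's `date +%s.%N` output against the local clock and, since the drift exceeded the allowed slack (delta=4.510168192s), push a `sudo date -s @<seconds>` to the guest. The sketch below only illustrates the drift check and command construction: parseGuestClock and clockFixCommand are invented names, the 2s slack is an assumption, and it resets the guest to the host clock, whereas the log line above shows the driver reusing the guest's own rounded seconds value.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the output of `date +%s.%N` into a time.Time.
    // It assumes a full nine-digit nanosecond field, as in the log.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec), nil
    }

    // clockFixCommand returns the remote command to run when guest and host
    // clocks drift apart by more than slack; an empty string means no fix is
    // needed. Using the host clock as the reference here is an assumption of
    // this sketch (see the log line above for the value actually applied).
    func clockFixCommand(guest, host time.Time, slack time.Duration) string {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        if delta <= slack {
            return ""
        }
        return fmt.Sprintf("sudo date -s @%d", host.Unix())
    }

    func main() {
        guest, _ := parseGuestClock("1738584447.944786092") // value from the log
        fmt.Println(clockFixCommand(guest, time.Now(), 2*time.Second))
    }
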
	I0203 12:07:32.420247   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:34.414938   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:34.415033   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:34.415033   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:36.776910   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:36.777793   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:36.780081   11844 out.go:177] * Found network options:
	I0203 12:07:36.782808   11844 out.go:177]   - NO_PROXY=172.25.1.53
	W0203 12:07:36.785149   11844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:07:36.787404   11844 out.go:177]   - NO_PROXY=172.25.1.53
	W0203 12:07:36.790220   11844 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 12:07:36.791199   11844 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:07:36.793657   11844 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:07:36.793657   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:36.800095   11844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:07:36.800095   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:07:38.802621   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:38.802684   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:38.802745   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:38.822569   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:38.822569   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:38.822569   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:41.203129   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:41.203129   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:41.203129   11844 sshutil.go:53] new ssh client: &{IP:172.25.8.35 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:07:41.229788   11844 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:07:41.230175   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:41.230175   11844 sshutil.go:53] new ssh client: &{IP:172.25.8.35 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:07:41.298206   11844 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0203 12:07:41.298337   11844 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4981914s)
	W0203 12:07:41.298337   11844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:07:41.307298   11844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:07:41.311742   11844 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:07:41.312820   11844 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.519054s)
	W0203 12:07:41.312820   11844 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:07:41.341490   11844 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:07:41.341557   11844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:07:41.341666   11844 start.go:495] detecting cgroup driver to use...
	I0203 12:07:41.341871   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:07:41.374632   11844 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:07:41.383317   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:07:41.412362   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 12:07:41.432187   11844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:07:41.444481   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0203 12:07:41.471478   11844 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:07:41.471478   11844 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:07:41.475475   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:07:41.504335   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:07:41.531269   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:07:41.557369   11844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:07:41.583767   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:07:41.611913   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:07:41.641397   11844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 12:07:41.667580   11844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:07:41.684630   11844 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:07:41.685524   11844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:07:41.694682   11844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:07:41.723234   11844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 12:07:41.753648   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:41.950694   11844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:07:41.984008   11844 start.go:495] detecting cgroup driver to use...
	I0203 12:07:41.992880   11844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:07:42.016937   11844 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:07:42.016937   11844 command_runner.go:130] > [Unit]
	I0203 12:07:42.016937   11844 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:07:42.016937   11844 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:07:42.016937   11844 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:07:42.016937   11844 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:07:42.016937   11844 command_runner.go:130] > StartLimitBurst=3
	I0203 12:07:42.016937   11844 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:07:42.016937   11844 command_runner.go:130] > [Service]
	I0203 12:07:42.016937   11844 command_runner.go:130] > Type=notify
	I0203 12:07:42.016937   11844 command_runner.go:130] > Restart=on-failure
	I0203 12:07:42.016937   11844 command_runner.go:130] > Environment=NO_PROXY=172.25.1.53
	I0203 12:07:42.016937   11844 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:07:42.016937   11844 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:07:42.016937   11844 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:07:42.016937   11844 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:07:42.016937   11844 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:07:42.016937   11844 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:07:42.016937   11844 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:07:42.016937   11844 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:07:42.016937   11844 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:07:42.016937   11844 command_runner.go:130] > ExecStart=
	I0203 12:07:42.016937   11844 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:07:42.016937   11844 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:07:42.016937   11844 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:07:42.016937   11844 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:07:42.016937   11844 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:07:42.016937   11844 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:07:42.016937   11844 command_runner.go:130] > LimitCORE=infinity
	I0203 12:07:42.016937   11844 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:07:42.016937   11844 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:07:42.016937   11844 command_runner.go:130] > TasksMax=infinity
	I0203 12:07:42.016937   11844 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:07:42.016937   11844 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:07:42.016937   11844 command_runner.go:130] > Delegate=yes
	I0203 12:07:42.016937   11844 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:07:42.016937   11844 command_runner.go:130] > KillMode=process
	I0203 12:07:42.016937   11844 command_runner.go:130] > [Install]
	I0203 12:07:42.016937   11844 command_runner.go:130] > WantedBy=multi-user.target
	I0203 12:07:42.025807   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:07:42.056684   11844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:07:42.091767   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:07:42.124903   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:07:42.160340   11844 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:07:42.222162   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:07:42.247682   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:07:42.287839   11844 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
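	The /etc/crictl.yaml written here is what lets the bare "crictl version" call later in this log resolve to cri-dockerd. A minimal sketch of the equivalent explicit form, assuming the same socket path; passing --runtime-endpoint overrides the config file:

	    # Equivalent to relying on /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	    # List containers through the same endpoint
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps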
	I0203 12:07:42.296355   11844 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:07:42.302970   11844 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:07:42.310981   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:07:42.329224   11844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:07:42.370104   11844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:07:42.566637   11844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:07:42.760845   11844 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:07:42.760961   11844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:07:42.806358   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:43.018121   11844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:07:45.612336   11844 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.594186s)
	I0203 12:07:45.622168   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:07:45.653450   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:07:45.687020   11844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:07:45.877791   11844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:07:46.073326   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:46.268314   11844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:07:46.306291   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:07:46.340290   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:46.549369   11844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:07:46.667137   11844 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:07:46.675143   11844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:07:46.683931   11844 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:07:46.683931   11844 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:07:46.683931   11844 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0203 12:07:46.683931   11844 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:07:46.683931   11844 command_runner.go:130] > Access: 2025-02-03 12:07:46.592869337 +0000
	I0203 12:07:46.683931   11844 command_runner.go:130] > Modify: 2025-02-03 12:07:46.592869337 +0000
	I0203 12:07:46.683931   11844 command_runner.go:130] > Change: 2025-02-03 12:07:46.596869345 +0000
	I0203 12:07:46.683931   11844 command_runner.go:130] >  Birth: -
	I0203 12:07:46.683931   11844 start.go:563] Will wait 60s for crictl version
	I0203 12:07:46.693304   11844 ssh_runner.go:195] Run: which crictl
	I0203 12:07:46.699492   11844 command_runner.go:130] > /usr/bin/crictl
	I0203 12:07:46.707284   11844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:07:46.756766   11844 command_runner.go:130] > Version:  0.1.0
	I0203 12:07:46.756766   11844 command_runner.go:130] > RuntimeName:  docker
	I0203 12:07:46.756766   11844 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:07:46.756766   11844 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:07:46.758911   11844 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:07:46.764737   11844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:07:46.802048   11844 command_runner.go:130] > 27.4.0
	I0203 12:07:46.808043   11844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:07:46.840075   11844 command_runner.go:130] > 27.4.0
	I0203 12:07:46.844835   11844 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:07:46.847340   11844 out.go:177]   - env NO_PROXY=172.25.1.53
	I0203 12:07:46.849790   11844 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:07:46.853516   11844 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:07:46.853516   11844 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:07:46.853516   11844 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:07:46.853516   11844 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:07:46.856081   11844 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:07:46.856081   11844 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:07:46.863081   11844 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:07:46.869368   11844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
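	The one-liner above updates /etc/hosts idempotently: it strips any existing host.minikube.internal entry, appends the current host gateway address, and installs the result with sudo. The same pattern spelled out as a readable sketch, using the 172.25.0.1 gateway from this run:

	    # Drop any stale entry, append the current one, then install the new file
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'172.25.0.1\thost.minikube.internal'
	    } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts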
	I0203 12:07:46.895792   11844 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:07:46.896315   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:07:46.896436   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:07:48.867690   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:48.867690   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:48.867690   11844 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:07:48.868311   11844 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.8.35
	I0203 12:07:48.868311   11844 certs.go:194] generating shared ca certs ...
	I0203 12:07:48.868311   11844 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:07:48.868833   11844 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:07:48.869115   11844 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:07:48.869115   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:07:48.869680   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:07:48.869825   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:07:48.869825   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:07:48.869825   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:07:48.870447   11844 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:07:48.870516   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:07:48.870736   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:07:48.870851   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:07:48.871068   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:07:48.871491   11844 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:07:48.871593   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:07:48.871593   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:07:48.871593   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:07:48.871593   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:07:48.919082   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:07:48.964349   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:07:49.009158   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:07:49.055423   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:07:49.102440   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:07:49.148092   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:07:49.201082   11844 ssh_runner.go:195] Run: openssl version
	I0203 12:07:49.209865   11844 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:07:49.217967   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:07:49.246005   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:07:49.253016   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:07:49.253590   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:07:49.262357   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:07:49.271324   11844 command_runner.go:130] > 3ec20f2e
	I0203 12:07:49.279366   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 12:07:49.306090   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:07:49.334605   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:07:49.341826   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:07:49.341826   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:07:49.349704   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:07:49.357960   11844 command_runner.go:130] > b5213941
	I0203 12:07:49.365811   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:07:49.397374   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:07:49.429373   11844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:07:49.437021   11844 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:07:49.437021   11844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:07:49.444939   11844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:07:49.453168   11844 command_runner.go:130] > 51391683
	I0203 12:07:49.461409   11844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
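	The three blocks above install each PEM under /usr/share/ca-certificates and then expose it to OpenSSL by linking /etc/ssl/certs/<subject-hash>.0 to it, where the hash is what "openssl x509 -hash -noout" prints. A minimal sketch of the same pattern for an arbitrary certificate; the cert path is a placeholder:

	    # Compute the OpenSSL subject hash and create the <hash>.0 symlink OpenSSL looks up
	    cert=/usr/share/ca-certificates/cert.pem   # placeholder path
	    hash=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"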
	I0203 12:07:49.491179   11844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:07:49.497435   11844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:07:49.497435   11844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:07:49.497435   11844 kubeadm.go:934] updating node {m02 172.25.8.35 8443 v1.32.1 docker false true} ...
	I0203 12:07:49.498071   11844 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.8.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:07:49.505810   11844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:07:49.524487   11844 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	I0203 12:07:49.524802   11844 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0203 12:07:49.535784   11844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0203 12:07:49.553817   11844 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0203 12:07:49.553817   11844 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0203 12:07:49.553817   11844 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0203 12:07:49.553817   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 12:07:49.553817   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 12:07:49.563785   11844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0203 12:07:49.564785   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:07:49.564785   11844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0203 12:07:49.570962   11844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 12:07:49.571025   11844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0203 12:07:49.571166   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0203 12:07:49.604556   11844 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 12:07:49.604556   11844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 12:07:49.604625   11844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0203 12:07:49.604801   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0203 12:07:49.612501   11844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0203 12:07:49.661718   11844 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 12:07:49.667425   11844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0203 12:07:49.667622   11844 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
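	Because the new node has no cached Kubernetes binaries, kubeadm, kubectl and kubelet are fetched from the dl.k8s.io URLs listed above (each with a published .sha256 checksum) and placed in /var/lib/minikube/binaries/v1.32.1. A minimal sketch of the same download-and-verify step for one binary, using the URLs from this run:

	    # Download kubeadm v1.32.1 and verify it against the published checksum
	    curl -LO https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm
	    echo "$(curl -L https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum --check
	    sudo install -m 0755 kubeadm /var/lib/minikube/binaries/v1.32.1/kubeadm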
	I0203 12:07:50.661179   11844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0203 12:07:50.681727   11844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0203 12:07:50.714115   11844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 12:07:50.756480   11844 ssh_runner.go:195] Run: grep 172.25.1.53	control-plane.minikube.internal$ /etc/hosts
	I0203 12:07:50.763480   11844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.1.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:07:50.794977   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:51.000027   11844 ssh_runner.go:195] Run: sudo systemctl start kubelet
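	With the drop-in and unit file in place, kubelet is started via systemd; kubeadm's preflight further down waits on the kubelet healthz endpoint. A minimal sketch of checking the same things by hand on the node, assuming the default healthz port 10248 (the one the kubelet-check output below also uses):

	    # Confirm the unit is active and the kubelet health endpoint answers
	    sudo systemctl is-active kubelet
	    curl -sf http://127.0.0.1:10248/healthz && echo ok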
	I0203 12:07:51.029045   11844 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:07:51.029529   11844 start.go:317] joinCluster: &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenki
ns.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:07:51.029529   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 12:07:51.029529   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:07:53.013773   11844 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:07:53.014622   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:53.014622   11844 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:07:55.351039   11844 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:07:55.351841   11844 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:07:55.352234   11844 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:07:55.551852   11844 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token iqjwgp.6w37p4iuen1kdxne --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
	I0203 12:07:55.551852   11844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5222721s)
	I0203 12:07:55.551852   11844 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:07:55.551852   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqjwgp.6w37p4iuen1kdxne --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02"
	I0203 12:07:55.738529   11844 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 12:07:57.565075   11844 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:07:57.565192   11844 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0203 12:07:57.565192   11844 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.005657228s
	I0203 12:07:57.565192   11844 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0203 12:07:57.565333   11844 command_runner.go:130] > This node has joined the cluster:
	I0203 12:07:57.565333   11844 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0203 12:07:57.565333   11844 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0203 12:07:57.565333   11844 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0203 12:07:57.565333   11844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iqjwgp.6w37p4iuen1kdxne --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02": (2.0134578s)
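	The join output above suggests running 'kubectl get nodes' on the control plane; the labeling step and the API polling that follow do the equivalent through client-go. A minimal sketch of the manual check, assuming kubectl is pointed at this cluster's kubeconfig:

	    # On the control plane (or any machine with this cluster's kubeconfig)
	    kubectl get nodes -o wide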
	I0203 12:07:57.565437   11844 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 12:07:57.793984   11844 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0203 12:07:57.996492   11844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-749300-m02 minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=multinode-749300 minikube.k8s.io/primary=false
	I0203 12:07:58.149680   11844 command_runner.go:130] > node/multinode-749300-m02 labeled
	I0203 12:07:58.149986   11844 start.go:319] duration metric: took 7.1203772s to joinCluster
	I0203 12:07:58.150202   11844 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:07:58.150826   11844 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:07:58.154197   11844 out.go:177] * Verifying Kubernetes components...
	I0203 12:07:58.167236   11844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:07:58.358203   11844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:07:58.384756   11844 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:07:58.386224   11844 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.1.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:07:58.387677   11844 node_ready.go:35] waiting up to 6m0s for node "multinode-749300-m02" to be "Ready" ...
	I0203 12:07:58.387891   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:07:58.387891   11844 round_trippers.go:469] Request Headers:
	I0203 12:07:58.387891   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:07:58.387971   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:07:58.400891   11844 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 12:07:58.400891   11844 round_trippers.go:577] Response Headers:
	I0203 12:07:58.400891   11844 round_trippers.go:580]     Audit-Id: a0a69961-b1dc-4304-9188-956ed8254eba
	I0203 12:07:58.400891   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:07:58.400984   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:07:58.400984   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:07:58.400984   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:07:58.400984   11844 round_trippers.go:580]     Content-Length: 3918
	I0203 12:07:58.400984   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:07:58 GMT
	I0203 12:07:58.401056   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"603","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2894 chars]
	I0203 12:07:58.887767   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:07:58.887767   11844 round_trippers.go:469] Request Headers:
	I0203 12:07:58.887767   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:07:58.887767   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:07:58.891835   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:07:58.891835   11844 round_trippers.go:577] Response Headers:
	I0203 12:07:58.891835   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:07:58.891835   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:07:58.891835   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:07:58.891835   11844 round_trippers.go:580]     Content-Length: 3918
	I0203 12:07:58.892295   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:07:58 GMT
	I0203 12:07:58.892295   11844 round_trippers.go:580]     Audit-Id: 1d87fbc7-a214-4b33-90c3-62afe284a131
	I0203 12:07:58.892295   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:07:58.892413   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"603","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2894 chars]
	I0203 12:07:59.387943   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:07:59.387943   11844 round_trippers.go:469] Request Headers:
	I0203 12:07:59.387943   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:07:59.387943   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:07:59.392104   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:07:59.392344   11844 round_trippers.go:577] Response Headers:
	I0203 12:07:59.392344   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:07:59.392344   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:07:59.392344   11844 round_trippers.go:580]     Content-Length: 3918
	I0203 12:07:59.392344   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:07:59 GMT
	I0203 12:07:59.392344   11844 round_trippers.go:580]     Audit-Id: 333b6692-f2a7-4a08-8005-6c1d707ad3cc
	I0203 12:07:59.392344   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:07:59.392344   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:07:59.392559   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"603","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2894 chars]
	I0203 12:07:59.888667   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:07:59.889223   11844 round_trippers.go:469] Request Headers:
	I0203 12:07:59.889223   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:07:59.889223   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:07:59.893441   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:07:59.893441   11844 round_trippers.go:577] Response Headers:
	I0203 12:07:59.893441   11844 round_trippers.go:580]     Audit-Id: 3cc20f06-8439-4a02-902d-9d93314cb305
	I0203 12:07:59.893441   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:07:59.893441   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:07:59.893441   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:07:59.893441   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:07:59.893441   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:07:59.893441   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:07:59 GMT
	I0203 12:07:59.893617   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:00.388484   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:00.388921   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:00.388996   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:00.388996   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:00.394420   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:08:00.394494   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:00.394494   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:00.394494   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:00.394494   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:00.394494   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:00.394494   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:00 GMT
	I0203 12:08:00.394494   11844 round_trippers.go:580]     Audit-Id: c6243e7d-bf77-4e33-8554-dc93234ba199
	I0203 12:08:00.394494   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:00.394700   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:00.394855   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
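	The GET loop above is minikube polling the node object until its Ready condition turns True (still "False" at this point). The same check from the command line, as a minimal sketch assuming kubectl has this cluster's kubeconfig; the 6m timeout mirrors the wait announced earlier in the log:

	    # Print the Ready condition, then block until it becomes True
	    kubectl get node multinode-749300-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    kubectl wait --for=condition=Ready node/multinode-749300-m02 --timeout=6m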
	I0203 12:08:00.889144   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:00.889144   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:00.889240   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:00.889240   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:00.892397   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:00.892564   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:00.892564   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:00 GMT
	I0203 12:08:00.892564   11844 round_trippers.go:580]     Audit-Id: 8e8888a5-fb00-4f71-9534-9d0cbe2cda51
	I0203 12:08:00.892564   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:00.892564   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:00.892564   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:00.892564   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:00.892564   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:00.892724   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:01.388599   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:01.388599   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:01.388599   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:01.388599   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:01.393670   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:01.393670   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:01.393670   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:01.393750   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:01.393750   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:01.393750   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:01.393750   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:01.393750   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:01 GMT
	I0203 12:08:01.393750   11844 round_trippers.go:580]     Audit-Id: 5dc0ccbe-7cb1-4304-bb05-1e6e1ff3deb5
	I0203 12:08:01.393815   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:01.888796   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:01.888796   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:01.888796   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:01.888796   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:01.894314   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:08:01.894314   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:01.894314   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:01.894314   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:01 GMT
	I0203 12:08:01.894314   11844 round_trippers.go:580]     Audit-Id: ece63bcb-617f-4bcd-a00b-7d116b14d75d
	I0203 12:08:01.894314   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:01.894314   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:01.894314   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:01.894314   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:01.894314   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:02.388165   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:02.388165   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:02.388165   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:02.388165   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:02.392707   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:02.392851   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:02.392851   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:02.392851   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:02 GMT
	I0203 12:08:02.392851   11844 round_trippers.go:580]     Audit-Id: de628f30-e61e-4613-8648-d9fc2ac9991d
	I0203 12:08:02.392946   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:02.392976   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:02.392976   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:02.392976   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:02.393414   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:02.888259   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:02.888259   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:02.888259   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:02.888259   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:02.892452   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:02.892452   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:02.892452   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:02.892452   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:02.892452   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:02 GMT
	I0203 12:08:02.892540   11844 round_trippers.go:580]     Audit-Id: d0334fde-dfa4-431d-a301-15065d7a6407
	I0203 12:08:02.892540   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:02.892540   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:02.892540   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:02.892587   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:02.892587   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:03.387851   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:03.387851   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:03.387851   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:03.387851   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:03.392073   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:03.392176   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:03.392176   11844 round_trippers.go:580]     Audit-Id: 5650fc52-d2d9-49b1-8ff1-4a3ca5e5ed37
	I0203 12:08:03.392176   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:03.392176   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:03.392176   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:03.392176   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:03.392176   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:03.392176   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:03 GMT
	I0203 12:08:03.392364   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:03.888375   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:03.888375   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:03.888375   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:03.888375   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:03.891894   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:03.891894   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:03.891894   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:03.892007   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:03 GMT
	I0203 12:08:03.892007   11844 round_trippers.go:580]     Audit-Id: de4dce9e-1a16-4ae8-8484-322bd0d44d7d
	I0203 12:08:03.892007   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:03.892007   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:03.892007   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:03.892007   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:03.892093   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:04.388395   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:04.388395   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:04.388395   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:04.388395   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:04.392387   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:04.392387   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:04.392387   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:04.392387   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:04.392387   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:04 GMT
	I0203 12:08:04.392387   11844 round_trippers.go:580]     Audit-Id: 66bc843f-d53f-4ea0-a03e-38d5e72d957e
	I0203 12:08:04.392387   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:04.392387   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:04.392387   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:04.392387   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:04.888549   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:04.888549   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:04.888549   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:04.888549   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:04.892541   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:04.892541   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:04.892541   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:04 GMT
	I0203 12:08:04.892541   11844 round_trippers.go:580]     Audit-Id: 261cca5c-56de-4fad-bb2a-e0e4333764b5
	I0203 12:08:04.892541   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:04.892541   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:04.892541   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:04.892541   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:04.892541   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:04.892541   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:05.388663   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:05.388663   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:05.388663   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:05.388663   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:05.392651   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:05.392651   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:05.392651   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:05.392651   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:05.392651   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:05.392651   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:05.392651   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:05.392651   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:05 GMT
	I0203 12:08:05.392651   11844 round_trippers.go:580]     Audit-Id: 3a9ca98b-5ba5-4670-983d-34641a7bcad1
	I0203 12:08:05.392651   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:05.392651   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:05.888234   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:05.888234   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:05.888234   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:05.888234   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:05.893042   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:05.893042   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:05.893042   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:05.893042   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:05.893042   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:05 GMT
	I0203 12:08:05.893042   11844 round_trippers.go:580]     Audit-Id: c6de68be-0d45-4b99-a486-e5af5c142269
	I0203 12:08:05.893042   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:05.893042   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:05.893042   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:05.893311   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:06.388152   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:06.388152   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:06.388152   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:06.388152   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:06.392464   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:06.392464   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:06.392522   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:06.392522   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:06.392522   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:06.392522   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:06.392522   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:06 GMT
	I0203 12:08:06.392522   11844 round_trippers.go:580]     Audit-Id: 648dfe81-9ce3-40ec-b138-978d233ccae2
	I0203 12:08:06.392567   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:06.392919   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:06.888552   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:06.888552   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:06.888552   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:06.888552   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:06.892598   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:06.892598   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:06.892598   11844 round_trippers.go:580]     Audit-Id: 148c29a5-48eb-4094-8cb0-ab4a38c6f69b
	I0203 12:08:06.892598   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:06.892598   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:06.892598   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:06.892598   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:06.892598   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:06.892708   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:06 GMT
	I0203 12:08:06.892861   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:07.388165   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:07.388165   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:07.388165   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:07.388165   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:07.392616   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:07.392693   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:07.392693   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:07 GMT
	I0203 12:08:07.392693   11844 round_trippers.go:580]     Audit-Id: 9f5f3440-11b4-4182-89cc-71b0c0b5c926
	I0203 12:08:07.392760   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:07.392760   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:07.392760   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:07.392760   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:07.392809   11844 round_trippers.go:580]     Content-Length: 4027
	I0203 12:08:07.392809   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"608","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3003 chars]
	I0203 12:08:07.392809   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:07.888657   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:07.888657   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:07.888657   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:07.888657   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:08.007047   11844 round_trippers.go:574] Response Status: 200 OK in 118 milliseconds
	I0203 12:08:08.007047   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:08.007047   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:08.007047   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:08.007047   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:08.007047   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:08.007047   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:08 GMT
	I0203 12:08:08.007047   11844 round_trippers.go:580]     Audit-Id: 01988707-23c5-4dad-91b5-216992916e42
	I0203 12:08:08.007587   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:08.388231   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:08.388231   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:08.388231   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:08.388231   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:08.391985   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:08.391985   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:08.391985   11844 round_trippers.go:580]     Audit-Id: 9cc9096a-0b55-44af-a90c-7e23bd13bf6f
	I0203 12:08:08.391985   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:08.391985   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:08.391985   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:08.391985   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:08.391985   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:08 GMT
	I0203 12:08:08.393073   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:08.888822   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:08.888822   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:08.888822   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:08.888822   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:08.892620   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:08.892620   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:08.892620   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:08.892683   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:08.892683   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:08.892683   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:08.892683   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:08 GMT
	I0203 12:08:08.892683   11844 round_trippers.go:580]     Audit-Id: 190e9be0-70fc-4a23-8556-f55b94de6f9e
	I0203 12:08:08.892852   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:09.388119   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:09.388119   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:09.388119   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:09.388119   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:09.422165   11844 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0203 12:08:09.422270   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:09.422270   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:09.422270   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:09 GMT
	I0203 12:08:09.422270   11844 round_trippers.go:580]     Audit-Id: 566e0df4-6c9a-492f-a335-69f9d02f12d5
	I0203 12:08:09.422270   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:09.422270   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:09.422270   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:09.422495   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:09.422692   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:09.887967   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:09.887967   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:09.887967   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:09.887967   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:09.892004   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:09.892004   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:09.892004   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:09.892004   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:09 GMT
	I0203 12:08:09.892004   11844 round_trippers.go:580]     Audit-Id: eb087e85-a977-4fb9-82d9-854daf3929b6
	I0203 12:08:09.892004   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:09.892004   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:09.892004   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:09.893027   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:10.389298   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:10.389298   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:10.389298   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:10.389298   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:10.392301   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:10.393311   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:10.393345   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:10.393345   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:10.393345   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:10 GMT
	I0203 12:08:10.393345   11844 round_trippers.go:580]     Audit-Id: b68a6679-3846-46a8-bdf9-ce98ba4be82b
	I0203 12:08:10.393345   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:10.393345   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:10.393571   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:10.888787   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:10.888787   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:10.888855   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:10.888855   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:10.891702   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:10.891790   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:10.891790   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:10.891790   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:10 GMT
	I0203 12:08:10.891790   11844 round_trippers.go:580]     Audit-Id: 82ae04ce-481a-43aa-88e6-cf31a6482621
	I0203 12:08:10.891790   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:10.891790   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:10.891790   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:10.891896   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:11.388386   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:11.388386   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:11.388386   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:11.388386   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:11.391868   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:11.392015   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:11.392015   11844 round_trippers.go:580]     Audit-Id: 7adf9720-2e6c-4340-8447-e1eeba08d4e0
	I0203 12:08:11.392015   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:11.392015   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:11.392015   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:11.392015   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:11.392109   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:11 GMT
	I0203 12:08:11.392269   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:11.888806   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:11.888806   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:11.888806   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:11.888806   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:11.893361   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:11.893495   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:11.893495   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:11.893495   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:11.893559   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:11 GMT
	I0203 12:08:11.893559   11844 round_trippers.go:580]     Audit-Id: 19ace79b-caf1-4553-b212-b7baa9a25b27
	I0203 12:08:11.893559   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:11.893559   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:11.893726   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:11.894023   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:12.388689   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:12.388689   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:12.388689   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:12.388689   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:12.393740   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:08:12.393740   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:12.393740   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:12.393740   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:12.393861   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:12.393861   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:12 GMT
	I0203 12:08:12.393861   11844 round_trippers.go:580]     Audit-Id: d984d95d-fc84-4731-8029-02a2f1855238
	I0203 12:08:12.393861   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:12.394251   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:12.888816   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:12.888816   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:12.888816   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:12.888816   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:12.895845   11844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:08:12.895937   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:12.895937   11844 round_trippers.go:580]     Audit-Id: 5630c185-b613-4688-b2fb-15945c385648
	I0203 12:08:12.895937   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:12.895937   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:12.895937   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:12.895937   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:12.895937   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:12 GMT
	I0203 12:08:12.895991   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:13.389187   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:13.389263   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:13.389263   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:13.389335   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:13.393315   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:13.393380   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:13.393380   11844 round_trippers.go:580]     Audit-Id: fa6c3c84-0fd2-4f13-b203-fdfc23c6e79f
	I0203 12:08:13.393380   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:13.393380   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:13.393380   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:13.393380   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:13.393380   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:13 GMT
	I0203 12:08:13.393749   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:13.888285   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:13.888285   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:13.888285   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:13.888285   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:13.892399   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:13.892399   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:13.892399   11844 round_trippers.go:580]     Audit-Id: 8405216c-b73a-4832-afdd-31852cd1a07c
	I0203 12:08:13.892399   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:13.892399   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:13.892399   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:13.892399   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:13.892399   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:13 GMT
	I0203 12:08:13.892714   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:14.388281   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:14.388281   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:14.388281   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:14.388281   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:14.392599   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:14.392647   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:14.392647   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:14.392647   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:14.392647   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:14.392647   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:14 GMT
	I0203 12:08:14.392647   11844 round_trippers.go:580]     Audit-Id: d999cdae-2ae2-4b51-9a08-e1b85927bf37
	I0203 12:08:14.392647   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:14.392754   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:14.393194   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:14.889154   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:14.889154   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:14.889235   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:14.889235   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:14.896612   11844 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:08:14.896716   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:14.896716   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:14.896716   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:14.896716   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:14.896716   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:14.896716   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:14 GMT
	I0203 12:08:14.896797   11844 round_trippers.go:580]     Audit-Id: bcb3dd59-e518-45e8-8499-a202983cfbbd
	I0203 12:08:14.896912   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:15.388542   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:15.388542   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:15.388542   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:15.388542   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:15.394768   11844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:08:15.395628   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:15.395628   11844 round_trippers.go:580]     Audit-Id: 1411cd54-443b-40ed-9f43-8a1f598865c0
	I0203 12:08:15.395628   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:15.395628   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:15.395628   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:15.395628   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:15.395628   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:15 GMT
	I0203 12:08:15.396064   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:15.888768   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:15.888768   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:15.888768   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:15.888768   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:15.892724   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:15.892960   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:15.892960   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:15 GMT
	I0203 12:08:15.892960   11844 round_trippers.go:580]     Audit-Id: 3e1e7165-a249-4246-b0fd-b625759c61a9
	I0203 12:08:15.892960   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:15.892960   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:15.892960   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:15.892960   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:15.893191   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:16.388935   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:16.388935   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:16.388935   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:16.388935   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:16.394548   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:08:16.394711   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:16.394711   11844 round_trippers.go:580]     Audit-Id: e8fa5ae1-77b1-4fff-be10-9f64386467b0
	I0203 12:08:16.394711   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:16.394711   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:16.394711   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:16.394711   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:16.394711   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:16 GMT
	I0203 12:08:16.394923   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:16.394923   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:16.889183   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:16.889676   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:16.889676   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:16.889676   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:16.895333   11844 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:08:16.895333   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:16.895333   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:16.895333   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:16.895333   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:16 GMT
	I0203 12:08:16.895333   11844 round_trippers.go:580]     Audit-Id: 904b513a-9932-4ab9-9942-f00efc6a06d1
	I0203 12:08:16.895333   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:16.895333   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:16.896068   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:17.388933   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:17.388933   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:17.388933   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:17.388933   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:17.392797   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:17.392797   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:17.392797   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:17.392797   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:17.392797   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:17.392797   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:17.392885   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:17 GMT
	I0203 12:08:17.392885   11844 round_trippers.go:580]     Audit-Id: deeb0506-e970-4c65-a6d6-b6b1a7f55a68
	I0203 12:08:17.392989   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:17.888441   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:17.888441   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:17.888441   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:17.888527   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:17.892034   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:17.892034   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:17.892034   11844 round_trippers.go:580]     Audit-Id: 0bfc1caa-f226-4749-ad33-6389f9338761
	I0203 12:08:17.892034   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:17.892137   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:17.892137   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:17.892137   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:17.892137   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:17 GMT
	I0203 12:08:17.892469   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:18.388408   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:18.388408   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:18.388408   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:18.388408   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:18.392882   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:18.392976   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:18.392976   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:18.392976   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:18.392976   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:18 GMT
	I0203 12:08:18.392976   11844 round_trippers.go:580]     Audit-Id: 9680002a-9fab-432b-abc8-9079f657d18a
	I0203 12:08:18.392976   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:18.392976   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:18.393175   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:18.888236   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:18.888698   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:18.888698   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:18.888698   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:18.891935   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:18.891935   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:18.891935   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:18.891935   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:18.891935   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:18 GMT
	I0203 12:08:18.891935   11844 round_trippers.go:580]     Audit-Id: 9808ffa5-106a-4781-b37d-99ab7d88a25d
	I0203 12:08:18.891935   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:18.891935   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:18.892936   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:18.893273   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:19.388451   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:19.388533   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:19.388533   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:19.388533   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:19.392748   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:19.392748   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:19.392748   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:19.392748   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:19.392748   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:19 GMT
	I0203 12:08:19.392748   11844 round_trippers.go:580]     Audit-Id: b91f9499-1d7d-4ff2-be61-7fd0c9293025
	I0203 12:08:19.392748   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:19.392748   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:19.392986   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:19.888889   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:19.888889   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:19.888889   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:19.888889   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:19.893072   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:19.893072   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:19.893784   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:19.893784   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:19 GMT
	I0203 12:08:19.893784   11844 round_trippers.go:580]     Audit-Id: d5e58707-b538-4a7a-8093-b20b4d0afdfc
	I0203 12:08:19.893784   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:19.893784   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:19.893784   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:19.894510   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:20.389322   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:20.389386   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:20.389386   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:20.389386   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:20.393792   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:20.393895   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:20.393895   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:20 GMT
	I0203 12:08:20.393895   11844 round_trippers.go:580]     Audit-Id: 43539694-36a7-4bd6-8c11-00e0654e4a0d
	I0203 12:08:20.393895   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:20.393895   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:20.393895   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:20.393895   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:20.393957   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:20.888251   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:20.888251   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:20.888251   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:20.888251   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:20.893051   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:20.893051   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:20.893051   11844 round_trippers.go:580]     Audit-Id: 4534f7ce-4d52-4c06-8dc7-fc59465d7f3d
	I0203 12:08:20.893051   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:20.893051   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:20.893051   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:20.893051   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:20.893051   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:20 GMT
	I0203 12:08:20.893501   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:20.893786   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:21.388405   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:21.388405   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:21.388405   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:21.388405   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:21.392317   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:21.392405   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:21.392405   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:21.392405   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:21.392405   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:21.392405   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:21 GMT
	I0203 12:08:21.392405   11844 round_trippers.go:580]     Audit-Id: ef551899-5266-4745-998e-c23c334db6c4
	I0203 12:08:21.392405   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:21.392597   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:21.888225   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:21.888225   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:21.888225   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:21.888225   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:21.892668   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:21.892668   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:21.892668   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:21.892668   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:21 GMT
	I0203 12:08:21.892668   11844 round_trippers.go:580]     Audit-Id: 1859cf02-754a-4543-b7be-96b06f524e3a
	I0203 12:08:21.892668   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:21.892668   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:21.892809   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:21.892916   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:22.388223   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:22.388223   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:22.388223   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:22.388223   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:22.393133   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:22.393133   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:22.393133   11844 round_trippers.go:580]     Audit-Id: 3f78585f-9bbd-4900-85ae-3932ec77f38c
	I0203 12:08:22.393133   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:22.393133   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:22.393133   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:22.393133   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:22.393133   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:22 GMT
	I0203 12:08:22.393426   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:22.888112   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:22.888112   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:22.888112   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:22.888112   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:22.891736   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:22.891736   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:22.891736   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:22.891736   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:22.891818   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:22.891818   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:22 GMT
	I0203 12:08:22.891818   11844 round_trippers.go:580]     Audit-Id: 52a2311a-922d-4f93-b231-a4de83fcab97
	I0203 12:08:22.891818   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:22.892145   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:23.388885   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:23.388885   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:23.388885   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:23.388885   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:23.393185   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:23.393185   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:23.393185   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:23.393185   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:23.393185   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:23.393185   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:23 GMT
	I0203 12:08:23.393185   11844 round_trippers.go:580]     Audit-Id: 4d7a7756-730a-4d54-8e11-7ed0dd0252a1
	I0203 12:08:23.393185   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:23.393400   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:23.393767   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:23.888355   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:23.888355   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:23.888355   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:23.888355   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:23.893150   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:23.893150   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:23.893150   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:23.893150   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:23.893150   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:23.893150   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:23.893150   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:23 GMT
	I0203 12:08:23.893150   11844 round_trippers.go:580]     Audit-Id: 8f3055df-8b39-4f6d-b7a5-da3ad4c0120b
	I0203 12:08:23.893150   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:24.388889   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:24.388889   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:24.388889   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:24.388889   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:24.393436   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:24.393436   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:24.393557   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:24 GMT
	I0203 12:08:24.393557   11844 round_trippers.go:580]     Audit-Id: 72092f57-07d5-4926-bcce-f5180c5078c5
	I0203 12:08:24.393557   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:24.393557   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:24.393601   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:24.393630   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:24.393926   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:24.888867   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:24.889278   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:24.889278   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:24.889278   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:24.892317   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:24.892694   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:24.892694   11844 round_trippers.go:580]     Audit-Id: ab0b3bb4-6531-4631-915a-29676e00d5c5
	I0203 12:08:24.892694   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:24.892694   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:24.892788   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:24.892788   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:24.892788   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:24 GMT
	I0203 12:08:24.893004   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:25.388281   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:25.388281   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:25.388281   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:25.388281   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:25.392609   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:25.392914   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:25.392914   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:25.392914   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:25 GMT
	I0203 12:08:25.392914   11844 round_trippers.go:580]     Audit-Id: d7e34ab5-ddab-4bab-9abc-d58b4f2e4d6c
	I0203 12:08:25.392914   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:25.392914   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:25.392914   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:25.393129   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:25.888798   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:25.888798   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:25.888798   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:25.888798   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:25.893390   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:25.893461   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:25.893461   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:25.893461   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:25 GMT
	I0203 12:08:25.893550   11844 round_trippers.go:580]     Audit-Id: e87d706f-6c02-4d92-87d6-1082452b5ea8
	I0203 12:08:25.893550   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:25.893550   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:25.893550   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:25.893755   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:25.894138   11844 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:08:26.389407   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:26.389407   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.389407   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.389407   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.393727   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:26.393797   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.393797   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.393797   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.393797   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.393797   11844 round_trippers.go:580]     Audit-Id: ab55aa6f-6840-4b4f-8f11-07d43467fef9
	I0203 12:08:26.393797   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.393797   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.393797   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"618","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3395 chars]
	I0203 12:08:26.888968   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:26.888968   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.888968   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.888968   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.892173   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:26.892333   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.892333   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.892333   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.892333   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.892333   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.892333   11844 round_trippers.go:580]     Audit-Id: c72365bf-6390-4049-a0c0-d47923288276
	I0203 12:08:26.892447   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.892702   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"648","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3261 chars]
	I0203 12:08:26.893436   11844 node_ready.go:49] node "multinode-749300-m02" has status "Ready":"True"
	I0203 12:08:26.893489   11844 node_ready.go:38] duration metric: took 28.5054225s for node "multinode-749300-m02" to be "Ready" ...
	I0203 12:08:26.893489   11844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:08:26.893641   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods
	I0203 12:08:26.893679   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.893679   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.893679   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.897970   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:26.897970   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.897970   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.897970   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.897970   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.897970   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.897970   11844 round_trippers.go:580]     Audit-Id: 33137844-f979-42bc-9743-01e35c945991
	I0203 12:08:26.897970   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.899964   11844 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"649"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"447","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72876 chars]
	I0203 12:08:26.903123   11844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.903271   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:08:26.903271   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.903271   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.903271   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.906753   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:26.907728   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.907728   11844 round_trippers.go:580]     Audit-Id: f030d44a-2d94-49d2-a47e-f54604e47219
	I0203 12:08:26.907728   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.907728   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.907728   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.907728   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.907728   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.907728   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"447","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6830 chars]
	I0203 12:08:26.908517   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:26.908546   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.908546   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.908546   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.911315   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.911315   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.911315   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.911315   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.911315   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.911315   11844 round_trippers.go:580]     Audit-Id: 9baa5114-53eb-460e-b276-f208119ce862
	I0203 12:08:26.911315   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.911315   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.911315   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:26.912354   11844 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:26.912354   11844 pod_ready.go:82] duration metric: took 9.2307ms for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.912354   11844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.912354   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:08:26.912354   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.912354   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.912354   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.914922   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.915755   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.915755   11844 round_trippers.go:580]     Audit-Id: 310ef02e-7c3e-4393-83c6-60bc2f667d44
	I0203 12:08:26.915755   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.915755   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.915755   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.915755   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.915755   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.915962   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"c751851c-68ee-4c15-80ca-32642fcf2a5a","resourceVersion":"372","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.1.53:2379","kubernetes.io/config.hash":"cea8016677ee73c66077ce584fb15354","kubernetes.io/config.mirror":"cea8016677ee73c66077ce584fb15354","kubernetes.io/config.seen":"2025-02-03T12:04:55.455014244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cli
ent-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.m [truncated 6443 chars]
	I0203 12:08:26.916417   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:26.916483   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.916483   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.916483   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.918939   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.918939   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.918939   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.918939   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.918939   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.918939   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.918939   11844 round_trippers.go:580]     Audit-Id: 3264ecd5-ce95-451d-961e-50f847218fcd
	I0203 12:08:26.918939   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.919539   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:26.919539   11844 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:26.919539   11844 pod_ready.go:82] duration metric: took 7.1849ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.919539   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.919539   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:08:26.919539   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.920509   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.920509   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.922873   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.922873   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.922873   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.922873   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.922873   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.922873   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.922873   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.922873   11844 round_trippers.go:580]     Audit-Id: 0d8d0d17-13c0-42eb-872f-698045f20ff9
	I0203 12:08:26.922873   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"b18ba461-b225-4090-8341-159171502b52","resourceVersion":"402","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.1.53:8443","kubernetes.io/config.hash":"a8703dd831250f30e213efd5fca131d7","kubernetes.io/config.mirror":"a8703dd831250f30e213efd5fca131d7","kubernetes.io/config.seen":"2025-02-03T12:04:55.455019045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kuber
netes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.io [truncated 7674 chars]
	I0203 12:08:26.923854   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:26.923902   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.923902   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.923934   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.926332   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.926332   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.926332   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.926332   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.926332   11844 round_trippers.go:580]     Audit-Id: b9faa0a6-44d8-443b-9e28-8539e2109d59
	I0203 12:08:26.926332   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.926332   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.926332   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.926332   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:26.926893   11844 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:26.926893   11844 pod_ready.go:82] duration metric: took 7.3543ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.926893   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.927061   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:08:26.927106   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.927139   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.927139   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.929763   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.929834   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.929834   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.929834   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.929834   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.929834   11844 round_trippers.go:580]     Audit-Id: 88801d3b-bb7d-463d-b045-87ba65f5c382
	I0203 12:08:26.929834   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.929834   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.929834   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"405","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7254 chars]
	I0203 12:08:26.930563   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:26.930609   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:26.930609   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:26.930658   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:26.933238   11844 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:08:26.933238   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:26.933238   11844 round_trippers.go:580]     Audit-Id: d9b9544e-9a8e-49a8-8708-e99b81e40291
	I0203 12:08:26.933238   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:26.933238   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:26.933238   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:26.933238   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:26.933238   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:26 GMT
	I0203 12:08:26.933238   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:26.933238   11844 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:26.933238   11844 pod_ready.go:82] duration metric: took 6.345ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:26.933238   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:27.089645   11844 request.go:632] Waited for 155.5654ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:08:27.089645   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:08:27.090025   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:27.090025   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:27.090025   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:27.103647   11844 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0203 12:08:27.103718   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:27.103787   11844 round_trippers.go:580]     Audit-Id: 2cfa75fc-eb0e-48e9-8110-9967fdbd3be2
	I0203 12:08:27.103787   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:27.103787   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:27.103787   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:27.103787   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:27.103787   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:27 GMT
	I0203 12:08:27.104052   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"400","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6184 chars]
	I0203 12:08:27.289587   11844 request.go:632] Waited for 184.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:27.289587   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:27.289587   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:27.290059   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:27.290144   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:27.293612   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:27.293612   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:27.293612   11844 round_trippers.go:580]     Audit-Id: ed83c853-575a-42fc-9fee-694db826ed8d
	I0203 12:08:27.293612   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:27.293612   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:27.293612   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:27.293612   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:27.293612   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:27 GMT
	I0203 12:08:27.293723   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:27.294511   11844 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:27.294511   11844 pod_ready.go:82] duration metric: took 361.2689ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:27.294576   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:27.489527   11844 request.go:632] Waited for 194.8765ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:08:27.489527   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:08:27.489527   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:27.489527   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:27.489527   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:27.493663   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:27.493663   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:27.493663   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:27.493954   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:27.493954   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:27.493954   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:27 GMT
	I0203 12:08:27.493954   11844 round_trippers.go:580]     Audit-Id: 28a4881b-b317-4b26-8e00-828e7cd3b0fe
	I0203 12:08:27.493954   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:27.494287   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"625","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6192 chars]
	I0203 12:08:27.689901   11844 request.go:632] Waited for 195.118ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:27.689901   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:08:27.689901   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:27.689901   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:27.689901   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:27.693583   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:27.693583   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:27.693583   11844 round_trippers.go:580]     Audit-Id: 2513d703-1c21-49ec-ad8a-6c804b058ef2
	I0203 12:08:27.693583   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:27.693583   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:27.693583   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:27.693583   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:27.693583   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:27 GMT
	I0203 12:08:27.693804   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"648","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3261 chars]
	I0203 12:08:27.694163   11844 pod_ready.go:93] pod "kube-proxy-ggnq7" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:27.694163   11844 pod_ready.go:82] duration metric: took 399.5825ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:27.694163   11844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:27.889162   11844 request.go:632] Waited for 194.8881ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:08:27.889162   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:08:27.889162   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:27.889162   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:27.889162   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:27.893658   11844 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:08:27.893736   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:27.893736   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:27.893736   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:27.893736   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:27 GMT
	I0203 12:08:27.893736   11844 round_trippers.go:580]     Audit-Id: a209f945-8c94-4535-836e-26b53db924ed
	I0203 12:08:27.893736   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:27.893736   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:27.894006   11844 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"328","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5347 chars]
	I0203 12:08:28.089769   11844 request.go:632] Waited for 195.1755ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:28.089769   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes/multinode-749300
	I0203 12:08:28.089769   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:28.089769   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:28.089769   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:28.096591   11844 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:08:28.096591   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:28.096591   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:28 GMT
	I0203 12:08:28.096591   11844 round_trippers.go:580]     Audit-Id: 693ba138-062a-462f-ab8a-2acbe69a2957
	I0203 12:08:28.096591   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:28.096591   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:28.096591   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:28.096591   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:28.096591   11844 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","fi [truncated 4956 chars]
	I0203 12:08:28.097335   11844 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:08:28.097335   11844 pod_ready.go:82] duration metric: took 403.1675ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:08:28.097335   11844 pod_ready.go:39] duration metric: took 1.2038329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:08:28.097335   11844 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:08:28.106178   11844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:08:28.132362   11844 system_svc.go:56] duration metric: took 35.0269ms WaitForService to wait for kubelet
	I0203 12:08:28.132362   11844 kubeadm.go:582] duration metric: took 29.9818246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:08:28.132362   11844 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:08:28.289594   11844 request.go:632] Waited for 157.2296ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.1.53:8443/api/v1/nodes
	I0203 12:08:28.289812   11844 round_trippers.go:463] GET https://172.25.1.53:8443/api/v1/nodes
	I0203 12:08:28.289812   11844 round_trippers.go:469] Request Headers:
	I0203 12:08:28.289812   11844 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:08:28.289812   11844 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:08:28.293513   11844 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:08:28.294468   11844 round_trippers.go:577] Response Headers:
	I0203 12:08:28.294468   11844 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:08:28 GMT
	I0203 12:08:28.294468   11844 round_trippers.go:580]     Audit-Id: b41f771a-506b-4b1e-beae-bf8191424341
	I0203 12:08:28.294468   11844 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:08:28.294468   11844 round_trippers.go:580]     Content-Type: application/json
	I0203 12:08:28.294468   11844 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:08:28.294468   11844 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:08:28.294468   11844 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"652"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"454","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9776 chars]
	I0203 12:08:28.295476   11844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:08:28.295476   11844 node_conditions.go:123] node cpu capacity is 2
	I0203 12:08:28.295476   11844 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:08:28.295476   11844 node_conditions.go:123] node cpu capacity is 2
	I0203 12:08:28.295476   11844 node_conditions.go:105] duration metric: took 163.1119ms to run NodePressure ...
	I0203 12:08:28.295476   11844 start.go:241] waiting for startup goroutines ...
	I0203 12:08:28.295476   11844 start.go:255] writing updated cluster config ...
	I0203 12:08:28.303470   11844 ssh_runner.go:195] Run: rm -f paused
	I0203 12:08:28.426634   11844 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0203 12:08:28.430903   11844 out.go:177] * Done! kubectl is now configured to use "multinode-749300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.465281015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.475040898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.475134998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.475153999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.475256799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 cri-dockerd[1349]: time="2025-02-03T12:05:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 12:05:22 multinode-749300 cri-dockerd[1349]: time="2025-02-03T12:05:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.794769928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.794897829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.794918629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.795226631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.959263317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.959350318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.959363818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:05:22 multinode-749300 dockerd[1456]: time="2025-02-03T12:05:22.959825122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:08:51 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:51.573104313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:08:51 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:51.573196313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:08:51 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:51.573210913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:08:51 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:51.573384514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:08:51 multinode-749300 cri-dockerd[1349]: time="2025-02-03T12:08:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 03 12:08:53 multinode-749300 cri-dockerd[1349]: time="2025-02-03T12:08:53Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 03 12:08:53 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:53.421359537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:08:53 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:53.421525039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:08:53 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:53.421574839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:08:53 multinode-749300 dockerd[1456]: time="2025-02-03T12:08:53.421895242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	fe91a8d012aee       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	a6484d4fc4d7f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   a166f3c8776d2       storage-provisioner
	fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              4 minutes ago       Running             kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	c6dc514e98f69       e29f9c7391fd9                                                                                         4 minutes ago       Running             kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	8ade10c0fb096       019ee182b58e2                                                                                         4 minutes ago       Running             kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	88c40ca9aa3cb       2b0d6572d062c                                                                                         4 minutes ago       Running             kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	ebc67da1b9e9a       a9e7e6b294baf                                                                                         4 minutes ago       Running             etcd                      0                   16d03cfd685dc       etcd-multinode-749300
	e3efb81aa459a       95c0bda56fc4d                                                                                         4 minutes ago       Running             kube-apiserver            0                   d3c93fcfaa46c       kube-apiserver-multinode-749300
	
	
	==> coredns [fe91a8d012ae] <==
	[INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	[INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	[INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	[INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	[INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	[INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	[INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	[INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	[INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	[INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	[INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	[INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	[INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	[INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	[INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	[INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	[INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	[INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	[INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	[INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	[INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	[INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	[INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	[INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	[INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	
	
	==> describe nodes <==
	Name:               multinode-749300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-749300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=multinode-749300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-749300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 12:09:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 12:09:01 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 12:09:01 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 12:09:01 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 12:09:01 +0000   Mon, 03 Feb 2025 12:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.1.53
	  Hostname:    multinode-749300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a68f8adc4a34e5e9481743c34866de9
	  System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	  Boot ID:                    e713b078-6545-49f3-90ca-b5c9e1d54d4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m39s
	  kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m44s
	  kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m39s
	  kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m37s                  kube-proxy       
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s                  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s                  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s                  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s                  node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	  Normal  NodeReady                4m18s                  kubelet          Node multinode-749300 status is now: NodeReady
	
	
	Name:               multinode-749300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-749300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=multinode-749300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-749300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 12:09:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 12:08:58 +0000   Mon, 03 Feb 2025 12:07:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 12:08:58 +0000   Mon, 03 Feb 2025 12:07:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 12:08:58 +0000   Mon, 03 Feb 2025 12:07:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 12:08:58 +0000   Mon, 03 Feb 2025 12:08:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.8.35
	  Hostname:    multinode-749300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	  System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	  Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      102s
	  kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x2 over 102s)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	  Normal  NodeReady                73s                  kubelet          Node multinode-749300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.638794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +43.239903] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.187928] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Feb 3 12:04] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.093681] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.479661] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.194400] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.217114] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +2.889872] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.195808] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.204829] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.267784] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[ +10.906116] systemd-fstab-generator[1442]: Ignoring "noauto" option for root device
	[  +0.097509] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.663360] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +5.746586] systemd-fstab-generator[1852]: Ignoring "noauto" option for root device
	[  +0.105243] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.035117] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.139722] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.008532] systemd-fstab-generator[2379]: Ignoring "noauto" option for root device
	[Feb 3 12:05] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.983780] kauditd_printk_skb: 51 callbacks suppressed
	[Feb 3 12:08] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [ebc67da1b9e9] <==
	{"level":"info","ts":"2025-02-03T12:04:50.695396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-03T12:04:50.695463Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-03T12:04:50.695558Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:04:50.696799Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T12:04:50.698130Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.1.53:2379"}
	{"level":"info","ts":"2025-02-03T12:04:50.698660Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:04:50.698924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:04:50.699079Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:04:50.698961Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T12:04:50.700021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-03T12:05:34.713537Z","caller":"traceutil/trace.go:171","msg":"trace[700555778] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:475; }","duration":"154.323613ms","start":"2025-02-03T12:05:34.559196Z","end":"2025-02-03T12:05:34.713519Z","steps":["trace[700555778] 'read index received'  (duration: 154.117712ms)","trace[700555778] 'applied index is now lower than readState.Index'  (duration: 204.601µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T12:05:34.714386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.152818ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T12:05:34.714448Z","caller":"traceutil/trace.go:171","msg":"trace[523144219] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:460; }","duration":"155.243319ms","start":"2025-02-03T12:05:34.559190Z","end":"2025-02-03T12:05:34.714433Z","steps":["trace[523144219] 'agreement among raft nodes before linearized reading'  (duration: 155.132218ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-03T12:05:37.216597Z","caller":"traceutil/trace.go:171","msg":"trace[607725788] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"173.559997ms","start":"2025-02-03T12:05:37.043015Z","end":"2025-02-03T12:05:37.216575Z","steps":["trace[607725788] 'process raft request'  (duration: 173.363796ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-03T12:07:50.721229Z","caller":"traceutil/trace.go:171","msg":"trace[1591962634] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:611; }","duration":"161.787916ms","start":"2025-02-03T12:07:50.559422Z","end":"2025-02-03T12:07:50.721210Z","steps":["trace[1591962634] 'read index received'  (duration: 161.631616ms)","trace[1591962634] 'applied index is now lower than readState.Index'  (duration: 155.7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T12:07:50.721473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.033917ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T12:07:50.721990Z","caller":"traceutil/trace.go:171","msg":"trace[372476520] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:569; }","duration":"162.562118ms","start":"2025-02-03T12:07:50.559415Z","end":"2025-02-03T12:07:50.721977Z","steps":["trace[372476520] 'agreement among raft nodes before linearized reading'  (duration: 161.990517ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-03T12:07:50.722810Z","caller":"traceutil/trace.go:171","msg":"trace[604091346] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"382.746949ms","start":"2025-02-03T12:07:50.340051Z","end":"2025-02-03T12:07:50.722798Z","steps":["trace[604091346] 'process raft request'  (duration: 381.055445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T12:07:50.724546Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-03T12:07:50.339839Z","time spent":"384.172151ms","remote":"127.0.0.1:50614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:567 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-02-03T12:08:07.781112Z","caller":"traceutil/trace.go:171","msg":"trace[2070765436] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"221.563702ms","start":"2025-02-03T12:08:07.559529Z","end":"2025-02-03T12:08:07.781093Z","steps":["trace[2070765436] 'read index received'  (duration: 163.474296ms)","trace[2070765436] 'applied index is now lower than readState.Index'  (duration: 58.088506ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-03T12:08:07.781222Z","caller":"traceutil/trace.go:171","msg":"trace[771474905] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"258.991269ms","start":"2025-02-03T12:08:07.522221Z","end":"2025-02-03T12:08:07.781212Z","steps":["trace[771474905] 'process raft request'  (duration: 200.827763ms)","trace[771474905] 'compare'  (duration: 57.877305ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-03T12:08:07.781801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.259103ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-03T12:08:07.782770Z","caller":"traceutil/trace.go:171","msg":"trace[1744663269] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:619; }","duration":"223.235505ms","start":"2025-02-03T12:08:07.559524Z","end":"2025-02-03T12:08:07.782759Z","steps":["trace[1744663269] 'agreement among raft nodes before linearized reading'  (duration: 222.248803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-03T12:08:08.021141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.474698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-749300-m02\" limit:1 ","response":"range_response_count:1 size:3146"}
	{"level":"info","ts":"2025-02-03T12:08:08.021213Z","caller":"traceutil/trace.go:171","msg":"trace[521691111] range","detail":"{range_begin:/registry/minions/multinode-749300-m02; range_end:; response_count:1; response_revision:619; }","duration":"109.587598ms","start":"2025-02-03T12:08:07.911613Z","end":"2025-02-03T12:08:08.021201Z","steps":["trace[521691111] 'range keys from in-memory index tree'  (duration: 109.329398ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:39 up 6 min,  0 users,  load average: 0.04, 0.22, 0.14
	Linux multinode-749300 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fab2d9be6b5c] <==
	I0203 12:08:39.487445       1 main.go:301] handling current node
	I0203 12:08:49.486014       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:08:49.486062       1 main.go:301] handling current node
	I0203 12:08:49.486081       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:08:49.486089       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:08:59.479160       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:08:59.479277       1 main.go:301] handling current node
	I0203 12:08:59.479383       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:08:59.479581       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:09:09.479647       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:09:09.479805       1 main.go:301] handling current node
	I0203 12:09:09.479902       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:09:09.480067       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:09:19.485040       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:09:19.485149       1 main.go:301] handling current node
	I0203 12:09:19.485171       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:09:19.485179       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:09:29.479250       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:09:29.479525       1 main.go:301] handling current node
	I0203 12:09:29.479752       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:09:29.480034       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:09:39.479044       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:09:39.479148       1 main.go:301] handling current node
	I0203 12:09:39.479168       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:09:39.479177       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e3efb81aa459] <==
	I0203 12:04:53.011039       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0203 12:04:53.020163       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0203 12:04:53.020349       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:04:54.286211       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:04:54.376714       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:04:54.525157       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0203 12:04:54.548542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.1.53]
	I0203 12:04:54.549794       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:04:54.560201       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:04:55.082990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:04:55.426273       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:04:55.464182       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0203 12:04:55.502429       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:05:00.466310       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:05:00.550610       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0203 12:08:56.543278       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59331: use of closed network connection
	E0203 12:08:56.987961       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59334: use of closed network connection
	E0203 12:08:57.505914       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59336: use of closed network connection
	E0203 12:08:57.962683       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59338: use of closed network connection
	E0203 12:08:58.399779       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59340: use of closed network connection
	E0203 12:08:58.846945       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59342: use of closed network connection
	E0203 12:08:59.628639       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59345: use of closed network connection
	E0203 12:09:10.065811       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59347: use of closed network connection
	E0203 12:09:10.478131       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59349: use of closed network connection
	E0203 12:09:20.914050       1 conn.go:339] Error on socket receive: read tcp 172.25.1.53:8443->172.25.0.1:59352: use of closed network connection
	
	
	==> kube-controller-manager [8ade10c0fb09] <==
	I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	
	
	==> kube-proxy [c6dc514e98f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [88c40ca9aa3c] <==
	W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 03 12:05:23 multinode-749300 kubelet[2285]: I0203 12:05:23.167986    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v2gkp" podStartSLOduration=23.16796478 podStartE2EDuration="23.16796478s" podCreationTimestamp="2025-02-03 12:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:05:23.166450668 +0000 UTC m=+27.910161919" watchObservedRunningTime="2025-02-03 12:05:23.16796478 +0000 UTC m=+27.911676031"
	Feb 03 12:05:23 multinode-749300 kubelet[2285]: I0203 12:05:23.168186    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.168174481 podStartE2EDuration="16.168174481s" podCreationTimestamp="2025-02-03 12:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:05:23.135316733 +0000 UTC m=+27.879027984" watchObservedRunningTime="2025-02-03 12:05:23.168174481 +0000 UTC m=+27.911885732"
	Feb 03 12:05:55 multinode-749300 kubelet[2285]: E0203 12:05:55.578133    2285 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:05:55 multinode-749300 kubelet[2285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:05:55 multinode-749300 kubelet[2285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:05:55 multinode-749300 kubelet[2285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:05:55 multinode-749300 kubelet[2285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:06:55 multinode-749300 kubelet[2285]: E0203 12:06:55.581150    2285 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:06:55 multinode-749300 kubelet[2285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:06:55 multinode-749300 kubelet[2285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:06:55 multinode-749300 kubelet[2285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:06:55 multinode-749300 kubelet[2285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:07:55 multinode-749300 kubelet[2285]: E0203 12:07:55.582452    2285 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:07:55 multinode-749300 kubelet[2285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:07:55 multinode-749300 kubelet[2285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:07:55 multinode-749300 kubelet[2285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:07:55 multinode-749300 kubelet[2285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:08:51 multinode-749300 kubelet[2285]: I0203 12:08:51.070808    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m664r\" (UniqueName: \"kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r\") pod \"busybox-58667487b6-zgvmd\" (UID: \"5d672e4b-d76f-474b-ab97-487b532b6140\") " pod="default/busybox-58667487b6-zgvmd"
	Feb 03 12:08:53 multinode-749300 kubelet[2285]: I0203 12:08:53.924771    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-58667487b6-zgvmd" podStartSLOduration=2.490608023 podStartE2EDuration="3.924751348s" podCreationTimestamp="2025-02-03 12:08:50 +0000 UTC" firstStartedPulling="2025-02-03 12:08:51.797442199 +0000 UTC m=+236.541153350" lastFinishedPulling="2025-02-03 12:08:53.231585424 +0000 UTC m=+237.975296675" observedRunningTime="2025-02-03 12:08:53.924447045 +0000 UTC m=+238.668158296" watchObservedRunningTime="2025-02-03 12:08:53.924751348 +0000 UTC m=+238.668462599"
	Feb 03 12:08:55 multinode-749300 kubelet[2285]: E0203 12:08:55.579802    2285 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:08:55 multinode-749300 kubelet[2285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:08:55 multinode-749300 kubelet[2285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:08:55 multinode-749300 kubelet[2285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:08:55 multinode-749300 kubelet[2285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:08:58 multinode-749300 kubelet[2285]: E0203 12:08:58.847616    2285 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56228->127.0.0.1:34287: write tcp 127.0.0.1:56228->127.0.0.1:34287: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-749300 -n multinode-749300
E0203 12:09:48.961477    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-749300 -n multinode-749300: (11.1681372s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-749300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (53.93s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (544.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-749300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-749300
E0203 12:24:32.061275    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:24:48.972253    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-749300: (1m35.4481951s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-749300 --wait=true -v=8 --alsologtostderr
E0203 12:25:25.142959    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:28:28.229654    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:29:48.975317    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:30:25.145795    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-749300 --wait=true -v=8 --alsologtostderr: exit status 1 (6m37.5179594s)

                                                
                                                
-- stdout --
	* [multinode-749300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-749300" primary control-plane node in "multinode-749300" cluster
	* Restarting existing hyperv VM for "multinode-749300" ...
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-749300-m02" worker node in "multinode-749300" cluster
	* Restarting existing hyperv VM for "multinode-749300-m02" ...
	* Found network options:
	  - NO_PROXY=172.25.12.244
	  - NO_PROXY=172.25.12.244
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	  - env NO_PROXY=172.25.12.244
	* Verifying Kubernetes components...
	
	* Starting "multinode-749300-m03" worker node in "multinode-749300" cluster
	* Restarting existing hyperv VM for "multinode-749300-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 12:25:23.595911   13136 out.go:345] Setting OutFile to fd 1416 ...
	I0203 12:25:23.651904   13136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:25:23.651904   13136 out.go:358] Setting ErrFile to fd 1980...
	I0203 12:25:23.651904   13136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:25:23.670894   13136 out.go:352] Setting JSON to false
	I0203 12:25:23.672902   13136 start.go:129] hostinfo: {"hostname":"minikube5","uptime":170124,"bootTime":1738415398,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 12:25:23.672902   13136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 12:25:23.760924   13136 out.go:177] * [multinode-749300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 12:25:23.767076   13136 notify.go:220] Checking for updates...
	I0203 12:25:23.770973   13136 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:25:23.873617   13136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 12:25:23.940774   13136 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 12:25:24.015441   13136 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 12:25:24.031789   13136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 12:25:24.052654   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:25:24.053221   13136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 12:25:29.107637   13136 out.go:177] * Using the hyperv driver based on existing profile
	I0203 12:25:29.216091   13136 start.go:297] selected driver: hyperv
	I0203 12:25:29.216091   13136 start.go:901] validating driver "hyperv" against &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:25:29.216454   13136 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 12:25:29.260672   13136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:25:29.261690   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:25:29.261690   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:25:29.261690   13136 start.go:340] cluster config:
	{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:25:29.262214   13136 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 12:25:29.407965   13136 out.go:177] * Starting "multinode-749300" primary control-plane node in "multinode-749300" cluster
	I0203 12:25:29.509319   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:25:29.509319   13136 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 12:25:29.509319   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:25:29.511102   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:25:29.511305   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:25:29.511305   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:25:29.513431   13136 start.go:360] acquireMachinesLock for multinode-749300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:25:29.513431   13136 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-749300"
	I0203 12:25:29.513431   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:25:29.513431   13136 fix.go:54] fixHost starting: 
	I0203 12:25:29.514215   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:32.063523   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:25:32.063866   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:32.063904   13136 fix.go:112] recreateIfNeeded on multinode-749300: state=Stopped err=<nil>
	W0203 12:25:32.063904   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:25:32.157190   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300" ...
	I0203 12:25:32.214010   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300
	I0203 12:25:35.136044   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:35.136044   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:35.136044   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:25:35.136139   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:37.183933   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:37.183933   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:37.184023   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:39.503151   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:39.503852   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:40.504190   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:42.499934   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:42.500667   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:42.500667   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:44.806728   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:44.806728   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:45.807252   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:47.833064   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:47.833064   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:47.834011   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:50.147084   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:50.147632   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:51.148776   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:53.166288   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:53.166288   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:53.166411   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:55.443296   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:55.443399   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:56.444219   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:58.447898   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:58.447898   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:58.448426   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:00.828501   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:00.828594   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:00.830557   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:02.816755   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:02.816841   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:02.816902   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:05.136442   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:05.137211   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:05.137465   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:26:05.140035   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:26:05.140212   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:07.066165   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:07.066165   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:07.066272   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:09.383612   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:09.383766   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:09.386904   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:09.387570   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:09.387570   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:26:09.521739   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:26:09.521851   13136 buildroot.go:166] provisioning hostname "multinode-749300"
	I0203 12:26:09.521851   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:11.482052   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:11.482052   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:11.482237   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:13.839571   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:13.839571   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:13.846444   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:13.846713   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:13.846713   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300 && echo "multinode-749300" | sudo tee /etc/hostname
	I0203 12:26:13.995994   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300
	
	I0203 12:26:13.996102   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:15.938221   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:15.938221   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:15.938319   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:18.288139   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:18.288139   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:18.292035   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:18.293062   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:18.293062   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:26:18.442137   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:26:18.442137   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:26:18.442137   13136 buildroot.go:174] setting up certificates
	I0203 12:26:18.442137   13136 provision.go:84] configureAuth start
	I0203 12:26:18.442137   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:20.426042   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:20.426445   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:20.426445   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:22.761972   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:22.761972   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:22.762679   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:24.725930   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:24.725930   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:24.726188   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:27.054617   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:27.054789   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:27.054866   13136 provision.go:143] copyHostCerts
	I0203 12:26:27.055169   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:26:27.055169   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:26:27.055169   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:26:27.055847   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:26:27.056446   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:26:27.057065   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:26:27.057065   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:26:27.057065   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:26:27.057733   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:26:27.058335   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:26:27.058335   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:26:27.058335   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:26:27.059022   13136 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300 san=[127.0.0.1 172.25.12.244 localhost minikube multinode-749300]
	I0203 12:26:27.155879   13136 provision.go:177] copyRemoteCerts
	I0203 12:26:27.162885   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:26:27.162885   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:29.103378   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:29.103500   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:29.103500   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:31.431437   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:31.431997   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:31.432322   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:26:31.534958   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3719323s)
	I0203 12:26:31.535037   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:26:31.535037   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:26:31.577184   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:26:31.577591   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0203 12:26:31.624893   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:26:31.625898   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 12:26:31.671371   13136 provision.go:87] duration metric: took 13.2290459s to configureAuth
	I0203 12:26:31.671438   13136 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:26:31.671529   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:26:31.672100   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:33.622749   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:33.622749   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:33.622979   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:35.942649   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:35.942649   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:35.946807   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:35.947332   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:35.947523   13136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:26:36.084716   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:26:36.084716   13136 buildroot.go:70] root file system type: tmpfs
	I0203 12:26:36.085014   13136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:26:36.085122   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:38.055389   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:38.055895   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:38.055994   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:40.377952   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:40.378488   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:40.383190   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:40.383274   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:40.383274   13136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:26:40.538448   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:26:40.538705   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:42.452503   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:42.452535   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:42.452602   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:44.786441   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:44.786441   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:44.791468   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:44.791602   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:44.791602   13136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:26:47.267100   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:26:47.267100   13136 machine.go:96] duration metric: took 42.1265368s to provisionDockerMachine
	I0203 12:26:47.267100   13136 start.go:293] postStartSetup for "multinode-749300" (driver="hyperv")
	I0203 12:26:47.267100   13136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:26:47.275516   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:26:47.275516   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:49.222983   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:49.222983   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:49.223539   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:51.572945   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:51.573664   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:51.574034   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:26:51.683153   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4075313s)
	I0203 12:26:51.692286   13136 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:26:51.699033   13136 command_runner.go:130] > NAME=Buildroot
	I0203 12:26:51.699127   13136 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:26:51.699127   13136 command_runner.go:130] > ID=buildroot
	I0203 12:26:51.699127   13136 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:26:51.699127   13136 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:26:51.699308   13136 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:26:51.699335   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:26:51.699771   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:26:51.700523   13136 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:26:51.700594   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:26:51.709030   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:26:51.726362   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:26:51.769796   13136 start.go:296] duration metric: took 4.5026457s for postStartSetup
	I0203 12:26:51.769933   13136 fix.go:56] duration metric: took 1m22.2555815s for fixHost
	I0203 12:26:51.770070   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:56.093685   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:56.093685   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:56.098017   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:56.098630   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:56.098630   13136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:26:56.231749   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738585616.246550531
	
	I0203 12:26:56.231749   13136 fix.go:216] guest clock: 1738585616.246550531
	I0203 12:26:56.231880   13136 fix.go:229] Guest: 2025-02-03 12:26:56.246550531 +0000 UTC Remote: 2025-02-03 12:26:51.7699333 +0000 UTC m=+88.266606101 (delta=4.476617231s)
	I0203 12:26:56.231880   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:00.531615   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:00.531896   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:00.536034   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:27:00.536034   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:27:00.536034   13136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738585616
	I0203 12:27:00.674546   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:26:56 UTC 2025
	
	I0203 12:27:00.674546   13136 fix.go:236] clock set: Mon Feb  3 12:26:56 UTC 2025
	 (err=<nil>)
	I0203 12:27:00.674546   13136 start.go:83] releasing machines lock for "multinode-749300", held for 1m31.1600955s
	I0203 12:27:00.674546   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:02.673223   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:02.673223   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:02.673766   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:04.996525   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:04.996839   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:05.001161   13136 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:27:05.001308   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:05.009576   13136 ssh_runner.go:195] Run: cat /version.json
	I0203 12:27:05.009639   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:07.023280   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:07.023280   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:07.023382   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:09.444979   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:09.444979   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:09.446032   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:27:09.467324   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:09.467324   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:09.467816   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:27:09.541099   13136 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:27:09.541597   13136 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5403845s)
	W0203 12:27:09.541788   13136 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:27:09.557958   13136 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0203 12:27:09.557958   13136 ssh_runner.go:235] Completed: cat /version.json: (4.5483313s)
	I0203 12:27:09.565228   13136 ssh_runner.go:195] Run: systemctl --version
	I0203 12:27:09.573515   13136 command_runner.go:130] > systemd 252 (252)
	I0203 12:27:09.574580   13136 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0203 12:27:09.581880   13136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:27:09.590556   13136 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0203 12:27:09.590556   13136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:27:09.598628   13136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:27:09.626887   13136 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:27:09.627009   13136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:27:09.627009   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:27:09.627157   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:27:09.660074   13136 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:27:09.668815   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:27:09.694919   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0203 12:27:09.706635   13136 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:27:09.706635   13136 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:27:09.718483   13136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:27:09.726452   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 12:27:09.753426   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:27:09.779099   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:27:09.807690   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:27:09.835424   13136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:27:09.864173   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:27:09.891574   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:27:09.920820   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 12:27:09.949733   13136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:27:09.966097   13136 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:27:09.967149   13136 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:27:09.976000   13136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:27:10.005668   13136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 12:27:10.032858   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:10.234944   13136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:27:10.265837   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:27:10.274238   13136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:27:10.294157   13136 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:27:10.294157   13136 command_runner.go:130] > [Unit]
	I0203 12:27:10.294157   13136 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:27:10.294157   13136 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:27:10.294157   13136 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:27:10.294157   13136 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:27:10.294157   13136 command_runner.go:130] > StartLimitBurst=3
	I0203 12:27:10.294157   13136 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:27:10.294157   13136 command_runner.go:130] > [Service]
	I0203 12:27:10.294157   13136 command_runner.go:130] > Type=notify
	I0203 12:27:10.294157   13136 command_runner.go:130] > Restart=on-failure
	I0203 12:27:10.294157   13136 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:27:10.294157   13136 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:27:10.294157   13136 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:27:10.294157   13136 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:27:10.294157   13136 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:27:10.294157   13136 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:27:10.294157   13136 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:27:10.294157   13136 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:27:10.294157   13136 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecStart=
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:27:10.294157   13136 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:27:10.294157   13136 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:27:10.294157   13136 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > LimitCORE=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:27:10.294731   13136 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:27:10.294731   13136 command_runner.go:130] > TasksMax=infinity
	I0203 12:27:10.294731   13136 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:27:10.294782   13136 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:27:10.294782   13136 command_runner.go:130] > Delegate=yes
	I0203 12:27:10.294825   13136 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:27:10.294825   13136 command_runner.go:130] > KillMode=process
	I0203 12:27:10.294863   13136 command_runner.go:130] > [Install]
	I0203 12:27:10.294863   13136 command_runner.go:130] > WantedBy=multi-user.target
	I0203 12:27:10.303563   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:27:10.335505   13136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:27:10.377146   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:27:10.409002   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:27:10.441022   13136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:27:10.499742   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:27:10.524703   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:27:10.559564   13136 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0203 12:27:10.568258   13136 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:27:10.575372   13136 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:27:10.584155   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:27:10.601708   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:27:10.641190   13136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:27:10.835390   13136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:27:11.018343   13136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:27:11.018560   13136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:27:11.057570   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:11.257278   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:27:13.957023   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6997143s)
	I0203 12:27:13.965163   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:27:13.996412   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:27:14.027729   13136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:27:14.224705   13136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:27:14.423681   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:14.616531   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:27:14.654124   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:27:14.685448   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:14.863656   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:27:14.963201   13136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:27:14.973423   13136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:27:14.981755   13136 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:27:14.981826   13136 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:27:14.981826   13136 command_runner.go:130] > Device: 0,22	Inode: 860         Links: 1
	I0203 12:27:14.981826   13136 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:27:14.981826   13136 command_runner.go:130] > Access: 2025-02-03 12:27:14.903146812 +0000
	I0203 12:27:14.981826   13136 command_runner.go:130] > Modify: 2025-02-03 12:27:14.903146812 +0000
	I0203 12:27:14.982024   13136 command_runner.go:130] > Change: 2025-02-03 12:27:14.906146829 +0000
	I0203 12:27:14.982024   13136 command_runner.go:130] >  Birth: -
	I0203 12:27:14.982024   13136 start.go:563] Will wait 60s for crictl version
	I0203 12:27:14.991108   13136 ssh_runner.go:195] Run: which crictl
	I0203 12:27:14.997233   13136 command_runner.go:130] > /usr/bin/crictl
	I0203 12:27:15.004269   13136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:27:15.058678   13136 command_runner.go:130] > Version:  0.1.0
	I0203 12:27:15.058678   13136 command_runner.go:130] > RuntimeName:  docker
	I0203 12:27:15.058678   13136 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:27:15.058780   13136 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:27:15.058780   13136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:27:15.065303   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:27:15.097960   13136 command_runner.go:130] > 27.4.0
	I0203 12:27:15.107089   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:27:15.136957   13136 command_runner.go:130] > 27.4.0
	I0203 12:27:15.142877   13136 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:27:15.142877   13136 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:27:15.147513   13136 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:27:15.147513   13136 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:27:15.147557   13136 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:27:15.147557   13136 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:27:15.149790   13136 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:27:15.149790   13136 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:27:15.157169   13136 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:27:15.164236   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:27:15.185323   13136 kubeadm.go:883] updating cluster {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 12:27:15.185611   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:27:15.192086   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:27:15.218589   13136 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0203 12:27:15.218589   13136 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0203 12:27:15.218693   13136 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:27:15.218693   13136 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0203 12:27:15.218693   13136 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0203 12:27:15.218812   13136 docker.go:619] Images already preloaded, skipping extraction
	I0203 12:27:15.225500   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0203 12:27:15.251063   13136 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:27:15.251063   13136 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0203 12:27:15.251063   13136 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0203 12:27:15.251063   13136 cache_images.go:84] Images are preloaded, skipping loading
	I0203 12:27:15.251063   13136 kubeadm.go:934] updating node { 172.25.12.244 8443 v1.32.1 docker true true} ...
	I0203 12:27:15.251063   13136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:27:15.258573   13136 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 12:27:15.323844   13136 command_runner.go:130] > cgroupfs
	I0203 12:27:15.324015   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:27:15.324015   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:27:15.324015   13136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 12:27:15.324096   13136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.12.244 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-749300 NodeName:multinode-749300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.12.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.12.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 12:27:15.324276   13136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.12.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-749300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.12.244"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 12:27:15.332063   13136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubeadm
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubectl
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubelet
	I0203 12:27:15.352823   13136 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 12:27:15.361623   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 12:27:15.382454   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0203 12:27:15.412334   13136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 12:27:15.446820   13136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0203 12:27:15.487016   13136 ssh_runner.go:195] Run: grep 172.25.12.244	control-plane.minikube.internal$ /etc/hosts
	I0203 12:27:15.493655   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.12.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
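	(Note on the command above: a plain `sudo … > /etc/hosts` would perform the redirection in the unprivileged shell, so the entry is assembled in /tmp first and then copied into place with sudo. A minimal sketch of the same idiom, using only the IP and host name shown in the log; variable names are illustrative:)
	    # Sketch: rebuild /etc/hosts without the old control-plane entry, append the new one,
	    # then install it with sudo. IP/host name taken from the log above.
	    CP_IP=172.25.12.244
	    CP_NAME=control-plane.minikube.internal
	    { grep -v $'\t'"$CP_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$CP_IP" "$CP_NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts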
	I0203 12:27:15.523216   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:15.725295   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:27:15.753811   13136 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.12.244
	I0203 12:27:15.753867   13136 certs.go:194] generating shared ca certs ...
	I0203 12:27:15.753927   13136 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:15.754660   13136 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:27:15.755081   13136 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:27:15.755081   13136 certs.go:256] generating profile certs ...
	I0203 12:27:15.755748   13136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.key
	I0203 12:27:15.755858   13136 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888
	I0203 12:27:15.755970   13136 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.244]
	I0203 12:27:16.073923   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 ...
	I0203 12:27:16.073923   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888: {Name:mk40fb8c78e9cf744fa3088bb81814742e8351f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:16.075688   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888 ...
	I0203 12:27:16.075688   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888: {Name:mkcd8cc8fae2982ff1b1aaeea5284f71e52afe02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:16.076940   13136 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt
	I0203 12:27:16.090519   13136 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key
	I0203 12:27:16.091518   13136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key
	I0203 12:27:16.091518   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 12:27:16.093541   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 12:27:16.093541   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 12:27:16.093541   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:27:16.093541   13136 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:27:16.093541   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.096518   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:27:16.141373   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:27:16.185367   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:27:16.230961   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:27:16.276910   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 12:27:16.326911   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 12:27:16.373019   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 12:27:16.417192   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 12:27:16.462519   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:27:16.510893   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:27:16.555132   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:27:16.601150   13136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 12:27:16.639899   13136 ssh_runner.go:195] Run: openssl version
	I0203 12:27:16.648676   13136 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:27:16.656369   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:27:16.685198   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.692611   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.692611   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.700927   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.709793   13136 command_runner.go:130] > 3ec20f2e
	I0203 12:27:16.717616   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 12:27:16.746453   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:27:16.771868   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.779706   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.780156   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.788709   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.797056   13136 command_runner.go:130] > b5213941
	I0203 12:27:16.804460   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:27:16.830489   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:27:16.857958   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.864028   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.864028   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.872029   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.881272   13136 command_runner.go:130] > 51391683
	I0203 12:27:16.888830   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
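	(Note: the three blocks above repeat one pattern per PEM: place the certificate under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so TLS clients in the guest trust it. A minimal sketch of that pattern, using the minikubeCA path from the log; the hash is whatever openssl prints:)
	    # Sketch of the CA-trust step performed above for each certificate.
	    PEM=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$PEM")
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"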
	I0203 12:27:16.915968   13136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:27:16.923196   13136 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:27:16.923281   13136 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0203 12:27:16.923281   13136 command_runner.go:130] > Device: 8,1	Inode: 7336797     Links: 1
	I0203 12:27:16.923281   13136 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0203 12:27:16.923281   13136 command_runner.go:130] > Access: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923281   13136 command_runner.go:130] > Modify: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923281   13136 command_runner.go:130] > Change: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923345   13136 command_runner.go:130] >  Birth: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.931115   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 12:27:16.941453   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.949434   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 12:27:16.958581   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.966784   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 12:27:16.976228   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.983764   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 12:27:16.992433   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:17.001413   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 12:27:17.010458   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:17.018119   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 12:27:17.027219   13136 command_runner.go:130] > Certificate will not expire
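	(Note: each "Certificate will not expire" line above comes from an `openssl x509 -checkend` probe; 86400 seconds is 24 hours, so an existing certificate is reused only if it stays valid for at least another day. A sketch of the check, using one certificate path from the log:)
	    # Sketch of the 24h validity probe (-checkend takes seconds; 86400 = 24h).
	    # openssl exits 0 and prints "Certificate will not expire" when the cert is
	    # still valid past that window; a non-zero exit would force regeneration.
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      || echo "certificate expires within 24h; regenerate"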
	I0203 12:27:17.027493   13136 kubeadm.go:392] StartCluster: {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:27:17.033846   13136 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 12:27:17.068733   13136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/minikube/etcd:
	I0203 12:27:17.088115   13136 command_runner.go:130] > member
	I0203 12:27:17.088409   13136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 12:27:17.088487   13136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 12:27:17.096441   13136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 12:27:17.114662   13136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 12:27:17.115714   13136 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-749300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:17.116281   13136 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-749300" cluster setting kubeconfig missing "multinode-749300" context setting]
	I0203 12:27:17.116975   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:17.132871   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:17.134231   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:27:17.135337   13136 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 12:27:17.143595   13136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 12:27:17.163603   13136 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0203 12:27:17.163603   13136 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0203 12:27:17.163603   13136 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0203 12:27:17.163603   13136 command_runner.go:130] >  kind: InitConfiguration
	I0203 12:27:17.163603   13136 command_runner.go:130] >  localAPIEndpoint:
	I0203 12:27:17.163603   13136 command_runner.go:130] > -  advertiseAddress: 172.25.1.53
	I0203 12:27:17.163603   13136 command_runner.go:130] > +  advertiseAddress: 172.25.12.244
	I0203 12:27:17.163603   13136 command_runner.go:130] >    bindPort: 8443
	I0203 12:27:17.163603   13136 command_runner.go:130] >  bootstrapTokens:
	I0203 12:27:17.163603   13136 command_runner.go:130] >    - groups:
	I0203 12:27:17.163603   13136 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0203 12:27:17.163603   13136 command_runner.go:130] >    name: "multinode-749300"
	I0203 12:27:17.163603   13136 command_runner.go:130] >    kubeletExtraArgs:
	I0203 12:27:17.163603   13136 command_runner.go:130] >      - name: "node-ip"
	I0203 12:27:17.163603   13136 command_runner.go:130] > -      value: "172.25.1.53"
	I0203 12:27:17.163603   13136 command_runner.go:130] > +      value: "172.25.12.244"
	I0203 12:27:17.163603   13136 command_runner.go:130] >    taints: []
	I0203 12:27:17.163603   13136 command_runner.go:130] >  ---
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0203 12:27:17.163603   13136 command_runner.go:130] >  kind: ClusterConfiguration
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiServer:
	I0203 12:27:17.163603   13136 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.25.1.53"]
	I0203 12:27:17.163603   13136 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	I0203 12:27:17.163603   13136 command_runner.go:130] >    extraArgs:
	I0203 12:27:17.163603   13136 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0203 12:27:17.164346   13136 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0203 12:27:17.164346   13136 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.1.53
	+  advertiseAddress: 172.25.12.244
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-749300"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.25.1.53"
	+      value: "172.25.12.244"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.1.53"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0203 12:27:17.164346   13136 kubeadm.go:1160] stopping kube-system containers ...
	I0203 12:27:17.172115   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 12:27:17.202050   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:27:17.202050   13136 command_runner.go:130] > a6484d4fc4d7
	I0203 12:27:17.202050   13136 command_runner.go:130] > a166f3c8776d
	I0203 12:27:17.202050   13136 command_runner.go:130] > 26e5557dc32c
	I0203 12:27:17.202050   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:27:17.202050   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:27:17.202050   13136 command_runner.go:130] > cb49b32ba085
	I0203 12:27:17.202050   13136 command_runner.go:130] > 1ff01fa7d8c6
	I0203 12:27:17.202050   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:27:17.202050   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:27:17.202050   13136 command_runner.go:130] > ebc67da1b9e9
	I0203 12:27:17.202050   13136 command_runner.go:130] > e3efb81aa459
	I0203 12:27:17.202050   13136 command_runner.go:130] > b1b473818438
	I0203 12:27:17.202050   13136 command_runner.go:130] > d8d9e598659f
	I0203 12:27:17.202050   13136 command_runner.go:130] > 16d03cfd685d
	I0203 12:27:17.202050   13136 command_runner.go:130] > d3c93fcfaa46
	I0203 12:27:17.202050   13136 docker.go:483] Stopping containers: [fe91a8d012ae a6484d4fc4d7 a166f3c8776d 26e5557dc32c fab2d9be6b5c c6dc514e98f6 cb49b32ba085 1ff01fa7d8c6 8ade10c0fb09 88c40ca9aa3c ebc67da1b9e9 e3efb81aa459 b1b473818438 d8d9e598659f 16d03cfd685d d3c93fcfaa46]
	I0203 12:27:17.208947   13136 ssh_runner.go:195] Run: docker stop fe91a8d012ae a6484d4fc4d7 a166f3c8776d 26e5557dc32c fab2d9be6b5c c6dc514e98f6 cb49b32ba085 1ff01fa7d8c6 8ade10c0fb09 88c40ca9aa3c ebc67da1b9e9 e3efb81aa459 b1b473818438 d8d9e598659f 16d03cfd685d d3c93fcfaa46
	I0203 12:27:17.235967   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:27:17.235967   13136 command_runner.go:130] > a6484d4fc4d7
	I0203 12:27:17.235967   13136 command_runner.go:130] > a166f3c8776d
	I0203 12:27:17.236382   13136 command_runner.go:130] > 26e5557dc32c
	I0203 12:27:17.236881   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:27:17.236881   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:27:17.236881   13136 command_runner.go:130] > cb49b32ba085
	I0203 12:27:17.236881   13136 command_runner.go:130] > 1ff01fa7d8c6
	I0203 12:27:17.236881   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:27:17.237090   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:27:17.237475   13136 command_runner.go:130] > ebc67da1b9e9
	I0203 12:27:17.237475   13136 command_runner.go:130] > e3efb81aa459
	I0203 12:27:17.237475   13136 command_runner.go:130] > b1b473818438
	I0203 12:27:17.237475   13136 command_runner.go:130] > d8d9e598659f
	I0203 12:27:17.237475   13136 command_runner.go:130] > 16d03cfd685d
	I0203 12:27:17.237475   13136 command_runner.go:130] > d3c93fcfaa46
	I0203 12:27:17.248126   13136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 12:27:17.283854   13136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 12:27:17.301586   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0203 12:27:17.301679   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0203 12:27:17.301745   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0203 12:27:17.301745   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:27:17.301745   13136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:27:17.301745   13136 kubeadm.go:157] found existing configuration files:
	
	I0203 12:27:17.309587   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 12:27:17.326960   13136 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:27:17.327045   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:27:17.336246   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 12:27:17.360990   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 12:27:17.377859   13136 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:27:17.377859   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:27:17.388022   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 12:27:17.413284   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 12:27:17.429587   13136 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:27:17.429683   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:27:17.438139   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 12:27:17.462144   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 12:27:17.479394   13136 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:27:17.479394   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:27:17.488457   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 12:27:17.512799   13136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 12:27:17.530695   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:17.759091   13136 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using the existing "sa" key
	I0203 12:27:17.759285   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 12:27:19.246920   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4876179s)
	I0203 12:27:19.246920   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:27:19.546550   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 12:27:19.638798   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.721204   13136 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 12:27:19.725187   13136 api_server.go:52] waiting for apiserver process to appear ...
	I0203 12:27:19.733212   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:20.237681   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:20.734957   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.236573   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.736191   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.760186   13136 command_runner.go:130] > 1987
	I0203 12:27:21.761926   13136 api_server.go:72] duration metric: took 2.0366371s to wait for apiserver process to appear ...
	I0203 12:27:21.761926   13136 api_server.go:88] waiting for apiserver healthz status ...
	I0203 12:27:21.761991   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:24.805810   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 12:27:24.805810   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 12:27:24.805810   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:24.892495   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 12:27:24.892606   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 12:27:25.263040   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:25.272440   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:25.272772   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:25.762168   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:25.775975   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:25.775975   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:26.262974   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:26.271990   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:26.271990   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:26.763287   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:26.770907   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
	I0203 12:27:26.771574   13136 round_trippers.go:463] GET https://172.25.12.244:8443/version
	I0203 12:27:26.771621   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:26.771654   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:26.771654   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:26.782427   13136 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 12:27:26.782427   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:26.782427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Content-Length: 263
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:26 GMT
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Audit-Id: 88e97992-82b7-456c-adfd-c35de1f165c8
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:26.782427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:26.782427   13136 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0203 12:27:26.782427   13136 api_server.go:141] control plane version: v1.32.1
	I0203 12:27:26.782427   13136 api_server.go:131] duration metric: took 5.0204447s to wait for apiserver health ...
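	(Note: the polling above, repeated 500s from /healthz while the rbac/bootstrap-roles post-start hook finishes, then a 200 followed by GET /version, can be reproduced with a plain HTTP client. The sketch below is illustrative only and is not minikube's implementation; the endpoint, timeout, and the InsecureSkipVerify shortcut are assumptions — the real client authenticates with the cluster CA and client certificates.)

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// versionInfo mirrors the fields returned by GET /version in the log above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		// Endpoint taken from the log; adjust for your cluster.
		base := "https://172.25.12.244:8443"

		// InsecureSkipVerify only keeps the sketch short; a real client would
		// present the cluster CA and client certs instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		// Poll /healthz until the apiserver stops returning 500.
		for {
			resp, err := client.Get(base + "/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				break
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Once healthy, read the control-plane version as the log does.
		resp, err := client.Get(base + "/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		var v versionInfo
		if err := json.Unmarshal(body, &v); err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.32.1
	}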
	I0203 12:27:26.782427   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:27:26.782427   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:27:26.785304   13136 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 12:27:26.797151   13136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 12:27:26.811312   13136 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0203 12:27:26.811312   13136 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0203 12:27:26.811312   13136 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0203 12:27:26.811312   13136 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0203 12:27:26.811312   13136 command_runner.go:130] > Access: 2025-02-03 12:26:01.341859678 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] > Change: 2025-02-03 12:25:49.033000000 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] >  Birth: -
	I0203 12:27:26.811312   13136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 12:27:26.811312   13136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 12:27:26.891517   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 12:27:27.962216   13136 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > daemonset.apps/kindnet configured
	I0203 12:27:27.962216   13136 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0706875s)
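	(Note: the step above copies cni.yaml into the VM and runs the bundled kubectl against /var/lib/minikube/kubeconfig over SSH. A minimal way to drive the same apply from Go is to shell out to kubectl; the paths below are taken from the log and the assumption that kubectl is on PATH is illustrative, not how minikube itself invokes it.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Paths as seen in the log; the real run uses the kubectl binary shipped
		// in /var/lib/minikube/binaries/v1.32.1 over an SSH runner.
		cmd := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"apply", "-f", "/var/tmp/minikube/cni.yaml")

		// CombinedOutput returns the same "clusterrole ... unchanged" lines seen above.
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}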
	I0203 12:27:27.962216   13136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 12:27:27.962216   13136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 12:27:27.962216   13136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 12:27:27.962827   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:27:27.962827   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:27.962902   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:27.962902   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:27.969361   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:27.969361   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:27.969361   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:27.969361   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:27 GMT
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Audit-Id: 4b7e5a12-4ad9-4445-bd24-cef0f8ecc3a0
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:27.970591   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1831"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91050 chars]
	I0203 12:27:27.975815   13136 system_pods.go:59] 12 kube-system pods found
	I0203 12:27:27.976790   13136 system_pods.go:61] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 12:27:27.976790   13136 system_pods.go:61] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 12:27:27.976790   13136 system_pods.go:61] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:27:27.976790   13136 system_pods.go:74] duration metric: took 14.5737ms to wait for pod list to return data ...
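	(Note: the PodList request above is an ordinary GET of /api/v1/namespaces/kube-system/pods. With client-go the equivalent is a short program; the kubeconfig path is an assumption and this is a sketch, not the minikube code that produced the log.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; minikube points at its own profile's config.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Same request as the log: GET .../namespaces/kube-system/pods.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}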
	I0203 12:27:27.976790   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:27:27.976790   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:27:27.976790   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:27.976790   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:27.976790   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:27.981491   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:27.981491   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Audit-Id: f38bc849-5eec-47c6-b79f-6f65cc41c97e
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:27.981491   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:27.981491   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:27 GMT
	I0203 12:27:27.981491   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1831"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1751","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15625 chars]
	I0203 12:27:27.983178   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983178   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983252   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983252   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:105] duration metric: took 6.4613ms to run NodePressure ...
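	(Note: the node_conditions lines above read each node's ephemeral-storage and cpu capacity and verify that no pressure condition is set. A hedged client-go sketch of the same check, with the same kubeconfig assumption as the previous sketch:)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The log prints these two capacities for each of the three nodes.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

			// "Verifying NodePressure" amounts to checking that the pressure
			// conditions are not True.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure: %s=%s\n", c.Type, c.Status)
					}
				}
			}
		}
	}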
	I0203 12:27:27.983252   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:28.578159   13136 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0203 12:27:28.578159   13136 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0203 12:27:28.578256   13136 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0203 12:27:28.578336   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0203 12:27:28.578336   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.578336   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.578336   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.581675   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:28.581746   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Audit-Id: 01db7b09-7ad2-4996-911e-f77a5f75dbee
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.581746   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.581746   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.581924   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1844"},"items":[{"metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1803","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31685 chars]
	I0203 12:27:28.584130   13136 kubeadm.go:739] kubelet initialised
	I0203 12:27:28.584207   13136 kubeadm.go:740] duration metric: took 5.8739ms waiting for restarted kubelet to initialise ...
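	(Note: the "waiting for restarted kubelet" step queries kube-system pods filtered by the labelSelector=tier%3Dcontrol-plane seen in the URL above. With client-go that filter is a single ListOptions field; the sketch assumes a clientset built from the profile kubeconfig as in the earlier sketches.)

	package sketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listControlPlanePods mirrors the labelSelector=tier=control-plane request
	// above and prints the static control-plane pods (etcd, kube-apiserver,
	// kube-controller-manager, kube-scheduler).
	func listControlPlanePods(ctx context.Context, clientset *kubernetes.Clientset) error {
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "tier=control-plane"})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
		return nil
	}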
	I0203 12:27:28.584207   13136 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:27:28.584283   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:27:28.584359   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.584359   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.584359   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.598596   13136 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 12:27:28.598596   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.598596   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.598596   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Audit-Id: 20e171f0-0ab9-41de-8ac2-a9b4f5bb53c9
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.600213   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90859 chars]
	I0203 12:27:28.603009   13136 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.603009   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:27:28.603009   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.603009   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.604010   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.609179   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.609179   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.609179   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.609179   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.609248   13136 round_trippers.go:580]     Audit-Id: bddd1d6a-350b-421a-81c0-0dfd169a8647
	I0203 12:27:28.609312   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:27:28.610019   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.610019   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.610019   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.610019   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.615051   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.615051   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Audit-Id: 6981f24b-5d83-4ed7-be9a-b49d11381fa0
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.615131   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.615131   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.615131   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.615397   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.615845   13136 pod_ready.go:98] node "multinode-749300" hosting pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.615872   13136 pod_ready.go:82] duration metric: took 12.8635ms for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.615872   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
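	(Note: each pod wait above issues two GETs, one for the pod and one for the node it is scheduled on, and skips the wait with the "(skipping!)" message when the node's Ready condition is not True. The sketch below only approximates that gating; it is not the pod_ready.go implementation and again assumes a clientset built as in the earlier sketches.)

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReadyOrSkipped fetches the pod, then its node, and skips the wait when
	// the node itself is not Ready; otherwise it reports the pod's own Ready
	// condition.
	func podReadyOrSkipped(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}

		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("node %q not Ready (%s), skipping pod %q\n", node.Name, c.Status, name)
				return false, nil
			}
		}

		// Node is Ready: report the pod's Ready condition.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}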
	I0203 12:27:28.615872   13136 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.615872   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:27:28.615872   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.615872   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.615872   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.632686   13136 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0203 12:27:28.632791   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Audit-Id: 692cc1ac-b2dd-4851-b773-29173b51855c
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.632791   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.632791   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.633074   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1803","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6830 chars]
	I0203 12:27:28.633703   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.633703   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.633703   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.633703   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.640278   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:28.641359   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.641359   13136 round_trippers.go:580]     Audit-Id: bf8e2cf8-d33e-458d-bdb7-9408d32eb7b0
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.641421   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.641421   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.641566   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.641790   13136 pod_ready.go:98] node "multinode-749300" hosting pod "etcd-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.641790   13136 pod_ready.go:82] duration metric: took 25.9173ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.641790   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "etcd-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.641790   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.641790   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:27:28.641790   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.641790   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.641790   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.647352   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.647352   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Audit-Id: 02090c82-5dfe-4079-beea-8e3aa8909e25
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.647352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.647352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.647352   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1804","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8283 chars]
	I0203 12:27:28.648387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.648387   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.648387   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.648387   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.662371   13136 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0203 12:27:28.663319   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Audit-Id: b04c6cb4-42b5-4afb-9b14-79243ccf21e2
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.663386   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.663386   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.663386   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.663386   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-apiserver-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.663386   13136 pod_ready.go:82] duration metric: took 21.5954ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.663386   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-apiserver-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.663386   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.663386   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:27:28.663386   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.663386   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.663386   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.670390   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:27:28.670538   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.670584   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.670584   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Audit-Id: 4270eb5b-fb6c-4928-8e23-644a50c48faf
	I0203 12:27:28.670878   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1800","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0203 12:27:28.671523   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.671583   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.671583   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.671583   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.673667   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:27:28.673667   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Audit-Id: 48a91d29-157a-4efa-a3a4-8a5598956637
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.673667   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.673667   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.674247   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.674747   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-controller-manager-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.674747   13136 pod_ready.go:82] duration metric: took 11.3617ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.674811   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-controller-manager-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.674811   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.778670   13136 request.go:632] Waited for 103.8016ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:27:28.778670   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:27:28.778670   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.778670   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.778670   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.782680   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:28.783079   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Audit-Id: f9fed813-67cc-4bfa-819a-8ea2ab62c5da
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.783079   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.783079   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.783667   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:27:28.978613   13136 request.go:632] Waited for 193.25ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.979036   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.979036   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.979036   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.979036   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.982210   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:28.982670   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.982670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.982670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Audit-Id: 8830ef73-1d9e-4a80-a295-0387fdd97530
	I0203 12:27:28.982889   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.983335   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-proxy-9g92t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.983335   13136 pod_ready.go:82] duration metric: took 308.5208ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.983421   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-proxy-9g92t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
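	(Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, roughly QPS 5 with burst 10 by default, which the burst of per-pod GET/GET pairs above exhausts; the apiserver's APF is not the source of the delay. If the limiter mattered for a tool, it is tunable on rest.Config; the values below are examples only, not minikube settings.)

	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClientset builds a clientset with a larger client-side rate limit so
	// rapid sequences of small requests are not delayed by the default
	// QPS=5/Burst=10 token bucket. Example values only.
	func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		config.QPS = 50    // requests per second allowed by the client-side limiter
		config.Burst = 100 // short-term burst above QPS
		return kubernetes.NewForConfig(config)
	}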
	I0203 12:27:28.983421   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.178563   13136 request.go:632] Waited for 195.0576ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:27:29.178563   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:27:29.178563   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.178563   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.178563   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.183465   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.183465   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Audit-Id: a4811cf4-2cc2-46bf-b6f5-5d8c30923327
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.183465   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.183465   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.183795   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"625","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6192 chars]
	I0203 12:27:29.379120   13136 request.go:632] Waited for 194.4708ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:27:29.379120   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:27:29.379120   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.379120   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.379120   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.382603   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:29.383573   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Audit-Id: c725a749-bc11-4ced-a102-1790fd5816ba
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.383573   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.383573   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.383710   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"1637","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3825 chars]
	I0203 12:27:29.384288   13136 pod_ready.go:93] pod "kube-proxy-ggnq7" in "kube-system" namespace has status "Ready":"True"
	I0203 12:27:29.384288   13136 pod_ready.go:82] duration metric: took 400.8629ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.384288   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.578710   13136 request.go:632] Waited for 194.3449ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:27:29.578710   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:27:29.578710   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.578710   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.578710   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.582860   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.582937   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.582937   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.582937   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Audit-Id: 8beed07b-295a-4536-b88c-b8fc072b7160
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.583136   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:27:29.779113   13136 request.go:632] Waited for 195.2874ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:27:29.779113   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:27:29.779499   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.779499   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.779499   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.783724   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.783724   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Audit-Id: 076d6630-48d5-4d1d-bbb2-8b6cb1857772
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.783724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.783724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.784124   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:27:29.784635   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:27:29.784705   13136 pod_ready.go:82] duration metric: took 400.4125ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:29.784705   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:27:29.784705   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.978718   13136 request.go:632] Waited for 193.9433ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:27:29.978718   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:27:29.978718   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.978718   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.978718   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.983979   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:29.983979   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.983979   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Audit-Id: 51270b46-733a-4c19-8837-073f6f0e1762
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.984082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.984238   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1802","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5807 chars]
	I0203 12:27:30.179107   13136 request.go:632] Waited for 194.4324ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.179107   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.179107   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:30.179107   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:30.179107   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:30.183443   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:30.184177   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:30.184177   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:30.184177   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Audit-Id: a1b177a2-c298-4872-9b36-d2d11c68f6f5
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:30.184557   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:30.185089   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-scheduler-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:30.185089   13136 pod_ready.go:82] duration metric: took 400.3795ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:30.185089   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-scheduler-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:30.185157   13136 pod_ready.go:39] duration metric: took 1.600933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:27:30.185189   13136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 12:27:30.202790   13136 command_runner.go:130] > -16
	I0203 12:27:30.202790   13136 ops.go:34] apiserver oom_adj: -16
	I0203 12:27:30.202790   13136 kubeadm.go:597] duration metric: took 13.1141562s to restartPrimaryControlPlane
	I0203 12:27:30.202943   13136 kubeadm.go:394] duration metric: took 13.1751491s to StartCluster
	I0203 12:27:30.202943   13136 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:30.203202   13136 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:30.204742   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:30.206192   13136 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 12:27:30.206192   13136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 12:27:30.206477   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:27:30.209434   13136 out.go:177] * Verifying Kubernetes components...
	I0203 12:27:30.213569   13136 out.go:177] * Enabled addons: 
	I0203 12:27:30.220007   13136 addons.go:514] duration metric: took 13.8147ms for enable addons: enabled=[]
	I0203 12:27:30.224360   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:30.476666   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:27:30.507616   13136 node_ready.go:35] waiting up to 6m0s for node "multinode-749300" to be "Ready" ...
	I0203 12:27:30.507840   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.507840   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:30.507923   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:30.507923   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:30.510776   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:27:30.511553   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Audit-Id: c7a5e1c8-57d7-4efa-a0fc-09f3d91e8274
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:30.511553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:30.511553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:30.511706   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:31.008151   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:31.008544   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:31.008544   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:31.008544   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:31.013207   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:31.013282   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Audit-Id: c045ce5c-99e2-4667-a8cd-9ec9b890debd
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:31.013282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:31.013282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:31 GMT
	I0203 12:27:31.013527   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:31.508250   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:31.508250   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:31.508250   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:31.508250   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:31.512972   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:31.512972   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:31.512972   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:31.512972   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:31 GMT
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Audit-Id: 36d46f36-b463-4858-8031-8598ced3026b
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:31.512972   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.008477   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:32.008477   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:32.008477   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:32.008477   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:32.012796   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:32.012796   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:32.012796   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:32.012796   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:32 GMT
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Audit-Id: 416fdfdd-4fad-43d5-8f41-e19a6424eff4
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:32.012796   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.507842   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:32.507842   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:32.507842   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:32.507842   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:32.512911   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:32.512974   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:32.513021   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:32.513021   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:32 GMT
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Audit-Id: f2ff10b3-e952-4a87-9901-aab74a1df40f
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:32.513145   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.513788   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:33.007951   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:33.008424   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:33.008424   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:33.008424   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:33.015835   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:27:33.015835   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:33.015835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:33.015835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:33 GMT
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Audit-Id: bd59e2d8-37d0-434f-b318-1504d80acb12
	I0203 12:27:33.015835   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:33.509325   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:33.509325   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:33.509325   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:33.509325   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:33.512967   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:33.513083   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:33.513083   13136 round_trippers.go:580]     Audit-Id: b022ddbe-3ddb-4415-bbb9-a03268cbe56e
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:33.513193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:33.513193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:33 GMT
	I0203 12:27:33.513430   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:34.008319   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:34.008319   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:34.008319   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:34.008319   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:34.012486   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:34.012486   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Audit-Id: 14c5d798-b001-4970-92bd-db80f8ec2436
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:34.012486   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:34.012486   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:34 GMT
	I0203 12:27:34.012486   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:34.508430   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:34.508430   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:34.508430   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:34.508502   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:34.513022   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:34.513022   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:34 GMT
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Audit-Id: dfe2c8c7-ca2e-4b8e-8d18-1f2eb795336a
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:34.513022   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:34.513141   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:34.513189   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:35.008273   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:35.008273   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:35.008273   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:35.008273   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:35.013658   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:35.013722   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:35.013722   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:35.013722   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:35.013763   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:35 GMT
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Audit-Id: 3e607dde-7638-4620-bbb7-9605c7a969a6
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:35.014030   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:35.014515   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:35.508732   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:35.508732   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:35.508732   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:35.508732   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:35.513437   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:35.513516   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Audit-Id: 822b8e93-c5eb-4054-b02c-67f2e5e1cce7
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:35.513516   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:35.513605   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:35.513605   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:35 GMT
	I0203 12:27:35.513718   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:36.008116   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:36.008116   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:36.008116   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:36.008762   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:36.014511   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:36.014553   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:36 GMT
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Audit-Id: 41863f20-a232-4db3-9c62-a992f9cd8125
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:36.014553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:36.014553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:36.014553   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:36.508775   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:36.508775   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:36.508775   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:36.508775   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:36.513523   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:36.513523   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:36.513523   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:36.513523   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:36 GMT
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Audit-Id: d161e549-ac93-4829-b36c-8c3c5fcb9c82
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:36.513700   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.008800   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:37.008867   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:37.008867   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:37.008867   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:37.013383   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:37.013383   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Audit-Id: 6947c5d2-bd18-4339-ab7c-355a94dca74d
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:37.013383   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:37.013383   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:37 GMT
	I0203 12:27:37.013383   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.508109   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:37.508109   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:37.508109   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:37.508109   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:37.512690   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:37.512690   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:37.512690   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:37.512773   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:37.512773   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:37 GMT
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Audit-Id: bc41146d-8e9b-4a46-bbd4-721ac375fbd6
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:37.513121   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.513676   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:38.008797   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:38.008797   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:38.008797   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:38.008797   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:38.013740   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:38.013740   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:38.013845   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:38.013845   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:38 GMT
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Audit-Id: b4412b13-8d20-4017-932b-eaae432cb5c2
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:38.014051   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:38.509516   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:38.509516   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:38.509516   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:38.509516   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:38.518281   13136 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 12:27:38.518281   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:38.518281   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:38.518281   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:38 GMT
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Audit-Id: c295925e-b3e8-443e-a4eb-4840cd95329a
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:38.518281   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:39.008348   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:39.008348   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:39.008348   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:39.008348   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:39.013316   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:39.013316   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Audit-Id: 16c03dfe-dacc-4c41-bb0c-6a1e5586acc7
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:39.013316   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:39.013316   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:39 GMT
	I0203 12:27:39.013316   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:39.508417   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:39.508417   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:39.508417   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:39.508417   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:39.511841   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:39.511841   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:39.511841   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:39.511841   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:39 GMT
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Audit-Id: 8e2055ac-aa56-42cb-ab22-a749b07a4bca
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:39.511841   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:40.008463   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:40.008463   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:40.008463   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:40.008463   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:40.011940   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:40.011940   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:40.011940   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:40 GMT
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Audit-Id: 6d8ed845-c69e-4ffa-b397-8c5b40203683
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:40.011940   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:40.011940   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:40.012987   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:40.508720   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:40.508720   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:40.508720   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:40.508720   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:40.512753   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:40.512925   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:40.512994   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:40.513062   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:40 GMT
	I0203 12:27:40.513164   13136 round_trippers.go:580]     Audit-Id: a49ce82b-3b9b-4898-98fe-94c27801bf47
	I0203 12:27:40.513186   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:40.513186   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:40.513186   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:40.513186   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:41.008687   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:41.008687   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:41.008687   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:41.008687   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:41.013470   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:41.013470   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Audit-Id: e4fbfd5c-f296-47e8-8312-d997b5d82ce7
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:41.013470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:41.013470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:41 GMT
	I0203 12:27:41.013470   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:41.508313   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:41.508313   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:41.508313   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:41.508313   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:41.514266   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:41.514266   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:41.514378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:41 GMT
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Audit-Id: df0d383e-d08c-4869-983e-f843dcb93919
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:41.514378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:41.514514   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:42.008729   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:42.008729   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:42.008729   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:42.008729   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:42.015343   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:42.015343   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:42.015441   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:42.015441   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:42 GMT
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Audit-Id: 936fb3f8-94f1-4466-83fa-901ea373139c
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:42.015759   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:42.015880   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:42.508548   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:42.508548   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:42.508548   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:42.508548   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:42.513590   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:42.513716   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:42 GMT
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Audit-Id: fdc4180b-6cd5-49e8-8095-7fd73de99d23
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:42.513716   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:42.513716   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:42.513962   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:43.008067   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:43.008067   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:43.008067   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:43.008067   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:43.012432   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:43.012432   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:43.012432   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:43 GMT
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Audit-Id: 6f3232af-ecf1-4e39-9843-205db8d993a0
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:43.012432   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:43.012757   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:43.508716   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:43.508716   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:43.508716   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:43.508716   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:43.513113   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:43.513113   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Audit-Id: 46d84bfe-5909-46e7-9f54-398c874ed7d0
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:43.513188   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:43.513188   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:43 GMT
	I0203 12:27:43.513358   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.008524   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:44.008524   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:44.008524   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:44.008524   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:44.012491   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:44.012587   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:44.012587   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:44.012587   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:44 GMT
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Audit-Id: 61476df4-83e4-49a7-800a-7b30f83515e2
	I0203 12:27:44.012664   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:44.012810   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.508162   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:44.508162   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:44.508162   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:44.508162   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:44.512359   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:44.512452   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:44.512452   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:44 GMT
	I0203 12:27:44.512452   13136 round_trippers.go:580]     Audit-Id: df381f70-7d17-451f-8a3a-e2f1443be16c
	I0203 12:27:44.512512   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:44.512512   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:44.512512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:44.512512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:44.512886   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.513312   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:45.008904   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:45.008904   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:45.008904   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:45.008904   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:45.012475   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:45.012710   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Audit-Id: a10b4f61-0364-4b8f-92a9-a5b26aa407a7
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:45.012710   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:45.012710   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:45 GMT
	I0203 12:27:45.013109   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:45.509060   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:45.509060   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:45.509060   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:45.509060   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:45.512772   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:45.513425   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:45.513425   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:45.513425   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:45 GMT
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Audit-Id: da278f94-e32d-4aef-bb62-f626e6360621
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:45.513603   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.008387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:46.008387   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:46.008387   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:46.008387   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:46.012738   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:46.013019   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:46.013019   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:46.013019   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:46 GMT
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Audit-Id: c6d2145a-3b10-460f-a7eb-7b173102cd21
	I0203 12:27:46.013234   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.508548   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:46.508548   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:46.508548   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:46.508548   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:46.512557   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:46.512641   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:46.512641   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:46.512641   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:46.512641   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:46.512641   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:46.512716   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:46 GMT
	I0203 12:27:46.512716   13136 round_trippers.go:580]     Audit-Id: bab84ac9-cf15-40c0-b73e-223248cc06fd
	I0203 12:27:46.512871   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.513452   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:47.008692   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:47.008692   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:47.008692   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:47.008692   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:47.013229   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:47.013342   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:47.013342   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:47.013342   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:47 GMT
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Audit-Id: 3e4372ec-e4f9-48bb-ac64-169b7246c511
	I0203 12:27:47.013445   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:47.013668   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:47.508406   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:47.508406   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:47.508406   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:47.508406   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:47.513195   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:47.513195   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Audit-Id: 5f64dfb6-9f52-4dde-9cbb-e06ed63d72e6
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:47.513195   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:47.513195   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:47 GMT
	I0203 12:27:47.513450   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.008483   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:48.008483   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:48.008483   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:48.008483   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:48.013170   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:48.013170   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:48.013170   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:48.013170   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:48 GMT
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Audit-Id: 20eb517a-e5ae-43d0-be0c-baf2625f7c39
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:48.013546   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.509096   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:48.509096   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:48.509096   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:48.509096   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:48.513384   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:48.513469   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:48.513469   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:48.513469   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:48 GMT
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Audit-Id: d0dce5d1-6ca2-466e-8f74-aa259aa5a2b7
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:48.513642   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.513841   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:49.008688   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:49.008688   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:49.008688   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:49.008688   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:49.012736   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:49.012736   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Audit-Id: 63d253c5-bdc5-49bb-943b-38f0802a49b2
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:49.012736   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:49.012736   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:49.012876   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:49 GMT
	I0203 12:27:49.013023   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:49.508487   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:49.508487   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:49.508487   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:49.508487   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:49.513724   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:49.513724   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:49.513724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:49 GMT
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Audit-Id: 1e581171-7253-4d90-a7e3-0156223c62a3
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:49.513811   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:49.513991   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:50.008296   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:50.008296   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:50.008296   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:50.008296   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:50.012266   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:50.012352   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:50.012352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:50 GMT
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Audit-Id: 691892a2-abfe-4446-868e-6e24d46d15e3
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:50.012352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:50.012604   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:50.508688   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:50.508688   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:50.508688   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:50.508688   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:50.512566   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:50.512566   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:50 GMT
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Audit-Id: 6bc1d9f4-3369-4342-baf3-cced55a145b5
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:50.512566   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:50.512566   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:50.512566   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:51.008955   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:51.008955   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:51.008955   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:51.009113   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:51.012878   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:51.012878   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:51.012878   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:51 GMT
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Audit-Id: 62902a23-896e-4efe-9940-464512caab66
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:51.012878   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:51.013200   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:51.013795   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:51.509425   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:51.509425   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:51.509425   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:51.509425   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:51.514147   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:51.514147   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Audit-Id: 39e97727-7c5f-4679-b5fd-5fd96dbc75cc
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:51.514147   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:51.514147   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:51 GMT
	I0203 12:27:51.514147   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:52.009219   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:52.009219   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:52.009219   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:52.009219   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:52.013565   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:52.013565   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:52.013565   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:52 GMT
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Audit-Id: 05f7f6d1-4ebf-4be1-8976-7bfdf9bbab45
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:52.013565   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:52.013882   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:52.508024   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:52.508024   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:52.508024   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:52.508024   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:52.512934   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:52.512934   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Audit-Id: 64dd654a-9f9e-4a9e-ace1-82437fa2cbcb
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:52.512934   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:52.512934   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:52 GMT
	I0203 12:27:52.513058   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:53.008211   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:53.008211   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:53.008211   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:53.008211   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:53.013220   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:53.013302   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:53.013302   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:53.013302   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:53.013302   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:53 GMT
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Audit-Id: b5738682-fb39-4d7f-9a31-2085e1c652d9
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:53.014259   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:53.015161   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:53.508050   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:53.508050   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:53.508050   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:53.508050   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:53.512709   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:53.512709   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Audit-Id: 938d616a-fe61-43ec-8408-83f01412535c
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:53.512709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:53.512709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:53 GMT
	I0203 12:27:53.512709   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:54.008647   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:54.008647   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:54.008647   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:54.008647   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:54.015318   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:54.015318   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Audit-Id: 82442fc1-750a-4a4a-b139-1ce5b2a7ae3f
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:54.015318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:54.015318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:54 GMT
	I0203 12:27:54.016289   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:54.508517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:54.508517   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:54.508517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:54.508517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:54.513698   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:54.513698   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:54.513698   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:54.513698   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:54 GMT
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Audit-Id: fdbd15d4-ea3c-4b5c-99b9-807ccaa99c59
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:54.513996   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.009506   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:55.009506   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:55.009506   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:55.009506   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:55.012848   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:55.013541   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:55.013541   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:55 GMT
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Audit-Id: 93ecdf96-b217-4d83-b1f3-301eee2a5b80
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:55.013541   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:55.013965   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.509270   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:55.509270   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:55.509270   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:55.509270   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:55.513470   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:55.513470   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:55.513470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:55.513470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:55 GMT
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Audit-Id: bdfa3e70-0afa-4651-bccc-61ba71596f53
	I0203 12:27:55.513707   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.514158   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:56.008464   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:56.008464   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:56.008464   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:56.008464   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:56.012050   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:56.012836   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:56.012836   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:56.012836   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:56 GMT
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Audit-Id: e78281b6-b1b6-4e7b-a09c-8e475f4467f8
	I0203 12:27:56.013172   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:56.508769   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:56.509218   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:56.509218   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:56.509218   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:56.513314   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:56.513314   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:56.513314   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:56.513314   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:56 GMT
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Audit-Id: 2374f17c-fc65-4046-b6a4-67f5de6848cd
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:56.513551   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.008174   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:57.008704   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:57.008704   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:57.008798   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:57.013183   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:57.013183   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:57.013183   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:57.013183   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:57 GMT
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Audit-Id: 16b26188-4b38-443a-bc9c-65cae13df402
	I0203 12:27:57.013183   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.508490   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:57.508490   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:57.508490   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:57.508490   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:57.513854   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:57.513941   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:57.513941   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:57 GMT
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Audit-Id: 2c833c05-7ff3-4928-82a8-ce94cd51da6d
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:57.513941   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:57.514188   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.514651   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:58.008783   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:58.009336   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:58.009412   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:58.009412   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:58.013395   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:58.013395   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Audit-Id: fd0d4521-dcd0-4e83-aef7-7320e7ae1452
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:58.013395   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:58.013395   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:58 GMT
	I0203 12:27:58.013944   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:58.508368   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:58.508368   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:58.508368   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:58.508368   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:58.512765   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:58.512765   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Audit-Id: df361e64-1cf8-4191-ac50-2e1e8fd87c7a
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:58.512868   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:58.512868   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:58 GMT
	I0203 12:27:58.513446   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:59.009707   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:59.009707   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:59.009707   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:59.009707   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:59.013350   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:59.013677   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Audit-Id: 48798ddc-6609-4231-b642-709b6dad2dd0
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:59.013677   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:59.013677   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:59 GMT
	I0203 12:27:59.014107   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:59.508318   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:59.508318   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:59.508318   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:59.508318   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:59.512290   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:59.513068   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:59.513068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:59.513068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:59 GMT
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Audit-Id: 0ca1f275-dd98-491b-a84f-7572cab5c452
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:59.513553   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:00.008521   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:00.008521   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:00.008521   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:00.008521   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:00.012064   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:00.012130   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Audit-Id: 3d319fe3-e65a-4fbe-8c38-3008d12152e7
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:00.012130   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:00.012130   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:00 GMT
	I0203 12:28:00.012367   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:00.012549   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:00.509392   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:00.509392   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:00.509392   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:00.509392   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:00.512755   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:00.512856   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:00 GMT
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Audit-Id: 3ea11098-c1b8-4eab-b9a4-3d0d4dbb90aa
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:00.512856   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:00.512856   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:00.512950   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:01.008682   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:01.008682   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:01.008682   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:01.008682   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:01.016352   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:01.016352   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:01 GMT
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Audit-Id: 8a84e20e-dc82-47d5-9d5e-0905557c1514
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:01.016352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:01.016352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:01.016352   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:01.508721   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:01.509252   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:01.509252   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:01.509252   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:01.516730   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:01.516784   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:01 GMT
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Audit-Id: 3ea996e0-e3eb-4bc1-a92d-3bf54b479449
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:01.516784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:01.516784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:01.516977   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:02.009018   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:02.009018   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:02.009018   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:02.009098   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:02.013337   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:02.013337   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:02.013337   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:02 GMT
	I0203 12:28:02.013337   13136 round_trippers.go:580]     Audit-Id: 9e5e229e-d9db-4a1c-955d-f3cd5b0af3a3
	I0203 12:28:02.013460   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:02.013460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:02.013460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:02.013460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:02.013816   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:02.014475   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:02.509448   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:02.509641   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:02.509641   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:02.509641   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:02.513259   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:02.513479   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Audit-Id: 5bc50d86-1c36-4cc7-8ef2-08c22c2908c8
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:02.513479   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:02.513479   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:02.513552   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:02 GMT
	I0203 12:28:02.514260   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:03.008816   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:03.008816   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:03.008816   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:03.008816   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:03.012748   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:03.012748   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:03.012748   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:03.012748   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:03 GMT
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Audit-Id: dbb06c18-bb02-46df-b609-ce147006b383
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:03.012748   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:03.508798   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:03.508798   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:03.508798   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:03.508798   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:03.513589   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:03.513589   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:03.513708   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:03 GMT
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Audit-Id: 3b3a7ba5-ef96-42d0-85a6-7962c4ee09be
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:03.513708   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:03.514155   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:04.008842   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:04.008842   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:04.008842   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:04.008842   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:04.013812   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:04.013933   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:04.013933   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:04.013933   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:04 GMT
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Audit-Id: a4d8bead-83c5-44a7-8177-28f09a06eef6
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:04.014132   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:04.014669   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:04.509026   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:04.509026   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:04.509026   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:04.509026   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:04.513040   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:04.513040   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Audit-Id: d38da522-9d3a-4b0a-a485-9bf6c1caa63e
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:04.513137   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:04.513137   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:04 GMT
	I0203 12:28:04.513439   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:05.008147   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:05.008147   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:05.008147   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:05.008147   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:05.012272   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:05.012272   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Audit-Id: eb14a766-b801-4164-87c1-418fd6ff7dc1
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:05.012272   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:05.012272   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:05 GMT
	I0203 12:28:05.012272   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:05.508453   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:05.508453   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:05.508453   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:05.508453   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:05.514452   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:05.514452   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:05 GMT
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Audit-Id: a0e10902-ac7c-459a-b40a-d00f51a0aed4
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:05.514452   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:05.514452   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:05.515054   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.008978   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:06.008978   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:06.008978   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:06.008978   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:06.012693   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:06.012765   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:06.012765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:06.012765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:06 GMT
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Audit-Id: f934ecd8-a45b-47d8-8f4b-dc2f2ee95d99
	I0203 12:28:06.012993   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.508408   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:06.508408   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:06.508408   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:06.508408   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:06.513025   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:06.513103   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:06 GMT
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Audit-Id: 45d0fb9a-d203-4768-854e-347f31d5e48c
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:06.513103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:06.513103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:06.513232   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.513770   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:07.008916   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:07.008916   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:07.008916   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:07.008916   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:07.012625   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:07.012625   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:07.012625   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:07 GMT
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Audit-Id: 8a9764e0-0561-4936-9a0d-f576b572237b
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:07.012625   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:07.012625   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:07.509113   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:07.509113   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:07.509113   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:07.509113   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:07.513408   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:07.513408   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:07.513408   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:07.513408   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:07 GMT
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Audit-Id: e9888c70-1ffa-4ef8-8bd6-69434f50eb3e
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:07.513780   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.009046   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:08.009046   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:08.009046   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:08.009046   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:08.016662   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:08.016725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:08.016725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:08 GMT
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Audit-Id: b006ece1-a3fc-47c4-b0b5-80714b815fdc
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:08.016784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:08.016952   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.509256   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:08.509256   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:08.509256   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:08.509256   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:08.514291   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:08.514291   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:08.514291   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:08 GMT
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Audit-Id: a0db5066-8e01-4126-8b70-7049932981b3
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:08.514291   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:08.514291   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.515156   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:09.009559   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:09.009559   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:09.009625   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:09.009625   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:09.013726   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:09.013726   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:09.013726   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:09.013726   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:09 GMT
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Audit-Id: 4105e73f-ae7c-4edb-bd90-50f9b5b24467
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:09.014033   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:09.508332   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:09.508332   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:09.508332   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:09.508332   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:09.512925   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:09.512925   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Audit-Id: f8f2ffd7-2556-485c-bd38-6664c2e84e5c
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:09.512925   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:09.512925   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:09 GMT
	I0203 12:28:09.512925   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:10.008452   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.008452   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.008452   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.008452   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.016901   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:10.016901   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.016901   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.016901   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Audit-Id: f0fb5700-49f5-4aa2-bd4f-461847d58a5a
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.017151   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1914","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5453 chars]
	I0203 12:28:10.017151   13136 node_ready.go:49] node "multinode-749300" has status "Ready":"True"
	I0203 12:28:10.017151   13136 node_ready.go:38] duration metric: took 39.5089814s for node "multinode-749300" to be "Ready" ...
	I0203 12:28:10.017151   13136 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
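The repeated GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300 requests above are the test binary polling the node's Ready condition roughly every 500ms until it reports True (which here took about 39.5s), after which it starts an equivalent poll for each system-critical pod. As a rough illustration only, and not minikube's actual node_ready.go implementation, a minimal client-go sketch of such a node-Ready poll might look like the following; the node name and the 500ms cadence and 6-minute budget are taken from the log, while the kubeconfig loading and error handling are assumptions:

// Hypothetical sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from the default location (an assumption for this sketch).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodeName := "multinode-749300" // node name as it appears in the log above

	// Poll every 500ms, up to 6 minutes, until the NodeReady condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				// Simplification: treat API errors as transient and keep polling.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}

The pod wait that follows in the log works the same way, except that it lists pods in kube-system matching the listed labels and checks each pod's Ready condition instead of the node's.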
	I0203 12:28:10.017151   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:10.017151   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.017151   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.017151   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.033173   13136 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0203 12:28:10.033771   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Audit-Id: 578a2d3a-8189-4eeb-b517-94366c7e6b76
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.033771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.033771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.036124   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1914"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90298 chars]
	I0203 12:28:10.039985   13136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:10.040142   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:10.040142   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.040142   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.040142   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.051703   13136 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 12:28:10.051703   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Audit-Id: 18fc8a36-48a8-4a16-9d76-5bc300577d64
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.051703   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.051703   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.051703   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:10.052411   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.052411   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.052411   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.052411   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.057066   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:10.057066   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Audit-Id: 7b49eeac-6ef1-4d6f-9637-6a15d104e5d2
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.057066   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.057066   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.057066   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:10.541122   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:10.541122   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.541122   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.541122   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.545725   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:10.545725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Audit-Id: eef1aa6c-b8e6-4ece-a267-abc65db4c707
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.545725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.545860   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.545860   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.545993   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:10.546835   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.546835   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.546894   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.546894   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.550028   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:10.550100   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Audit-Id: abfcf48b-95ae-4b09-bded-b9ff118139c1
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.550100   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.550100   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.550326   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:11.040666   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:11.040666   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.040666   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.040666   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.046640   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.046640   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.046640   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.046640   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Audit-Id: 5977d234-6a07-4214-b5b4-a72ff0160ab4
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.046640   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:11.046640   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:11.046640   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.046640   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.046640   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.050664   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.051265   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.051265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Audit-Id: 73dce08f-6c23-4f2b-98ef-8f9f3a58b585
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.051339   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.051597   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:11.540218   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:11.540218   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.540218   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.540218   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.545099   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.545099   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Audit-Id: 2591d6f2-092e-4c5d-921a-47e71177e964
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.545099   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.545099   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.545301   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:11.546115   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:11.546115   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.546208   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.546208   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.551295   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:11.551295   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Audit-Id: 9d922085-bb5d-4e88-93a3-0bd4313b3b7a
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.551295   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.551295   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.551999   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:12.040749   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:12.040749   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.040749   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.040749   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.045362   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:12.045460   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.045460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.045460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.045460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.045460   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.045544   13136 round_trippers.go:580]     Audit-Id: 38b72ff7-0e27-4254-b600-43fa6e99f48e
	I0203 12:28:12.045544   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.045637   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:12.046333   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:12.046333   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.046408   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.046408   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.049624   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:12.049624   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.049624   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.049624   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Audit-Id: fcf99cde-84e0-4a84-90f7-7eea4754a2f4
	I0203 12:28:12.050750   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:12.050993   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:12.541137   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:12.541137   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.541137   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.541137   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.545199   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:12.545199   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Audit-Id: 5589ede2-d2a7-4fe2-9280-63e2910827de
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.545315   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.545315   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.545686   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:12.546465   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:12.546465   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.546465   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.546465   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.549370   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:12.549446   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.549446   13136 round_trippers.go:580]     Audit-Id: a268ddb5-0b4b-439d-a1e2-1f0479dce27f
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.549512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.549512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.549609   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:13.041165   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:13.041165   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.041165   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.041165   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.045069   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:13.045069   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.045069   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.045069   13136 round_trippers.go:580]     Audit-Id: aed1876c-6afc-4567-9c94-9fb50cdb0899
	I0203 12:28:13.045169   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.045169   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.045169   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.045169   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.045537   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:13.046332   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:13.046410   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.046410   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.046410   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.052370   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:13.052370   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Audit-Id: 5e772ad2-b269-43aa-b341-2df237d7687a
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.052370   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.052370   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.052370   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:13.541681   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:13.541681   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.541681   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.541681   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.546472   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:13.546472   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.546472   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.546472   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Audit-Id: b8ce2091-75b5-437c-acaf-7c5a90a4052c
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.546472   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:13.547144   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:13.547144   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.547750   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.547855   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.551646   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:13.551734   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Audit-Id: bcebd6b1-c269-4209-8354-155c6236e811
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.551734   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.551734   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.552016   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:14.041243   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:14.041243   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.041243   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.041243   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.045676   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:14.045676   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.045676   13136 round_trippers.go:580]     Audit-Id: 5e278304-f037-458c-a7c2-34385dd97a3a
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.045771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.045771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.045853   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:14.046628   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:14.046628   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.046628   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.046628   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.052731   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:14.052731   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.052731   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.052731   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Audit-Id: a0527d73-57c3-40f0-bc56-60c7515c736f
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.052731   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:14.053511   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:14.541564   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:14.541650   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.541650   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.541650   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.545866   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:14.545866   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.545866   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.545866   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Audit-Id: f67eccfb-e648-4ab3-bed7-428a6eb02617
	I0203 12:28:14.545866   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:14.546902   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:14.546967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.546967   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.546967   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.549662   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:14.549662   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.549662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Audit-Id: 479c102c-a461-45ac-a960-ca3a65f55337
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.549662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.549937   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:15.040321   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:15.040321   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.040321   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.040321   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.044873   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:15.044970   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.044970   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.044970   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.044970   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.044970   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.045050   13136 round_trippers.go:580]     Audit-Id: 278cd36f-737d-439c-9196-c4bc9859a2d4
	I0203 12:28:15.045076   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.045285   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:15.046138   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:15.046138   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.046138   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.046201   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.048911   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:15.048911   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.048911   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.048911   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.048911   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Audit-Id: 875d38ec-68c3-429d-9b30-69ecc9185cfe
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.049605   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:15.540721   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:15.540721   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.540721   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.540721   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.544069   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:15.544153   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Audit-Id: bcd97cc3-9ad0-47f9-89bc-14f64eb4c1c0
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.544153   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.544153   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.544392   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:15.544724   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:15.544724   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.544724   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.544724   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.548033   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:15.548033   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.548243   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.548243   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Audit-Id: 5c1228d9-749f-442f-9654-6ce0a5fec451
	I0203 12:28:15.548530   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.042899   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:16.042973   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.042973   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.042973   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.046773   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.046835   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.046835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.046835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Audit-Id: b1a26ab1-4137-438a-b921-1e93efd74aaa
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.046998   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:16.047833   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:16.047833   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.047909   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.047909   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.051891   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.051978   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.051978   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.051978   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Audit-Id: 391bac31-3be8-43a6-ade4-19f865d07a19
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.052185   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.540864   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:16.541032   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.541032   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.541032   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.545804   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:16.545880   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.545880   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.545880   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Audit-Id: fb52dfe4-6767-4322-b3b9-6f18d560609a
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.546041   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:16.546930   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:16.546930   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.546930   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.546930   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.550233   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.550233   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.550233   13136 round_trippers.go:580]     Audit-Id: 77268df5-8715-4d25-8c37-a10ab16cae48
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.550787   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.550787   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.550963   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.551362   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:17.040611   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:17.040611   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.040611   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.040611   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.045018   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:17.045103   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.045103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.045103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Audit-Id: e4e5bc4e-6b5d-4ee6-a052-845e400862bb
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.045103   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:17.046263   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:17.046337   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.046337   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.046337   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.049411   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:17.049411   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.049411   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.049411   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Audit-Id: ecddf643-e2f3-45a0-a123-dffba90d6c81
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.049411   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:17.541764   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:17.541764   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.541764   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.541764   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.546089   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:17.546089   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Audit-Id: a6922501-d2cd-4a6b-a190-4faa31cdc2b5
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.546089   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.546089   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.546484   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:17.547122   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:17.547122   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.547122   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.547122   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.549400   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:17.550416   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.550416   13136 round_trippers.go:580]     Audit-Id: 2d030636-2e10-4ce2-8e40-299915aa0f09
	I0203 12:28:17.550416   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.550468   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.550468   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.550468   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.550468   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.550673   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:18.040839   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:18.040839   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.040839   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.040839   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.045193   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:18.045193   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Audit-Id: d1d1abe1-c722-4074-8eb1-a8bb625b4322
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.045193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.045193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.045193   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:18.046482   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:18.046482   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.046482   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.046482   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.053246   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:18.053246   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.053246   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Audit-Id: f52501e7-69b7-4159-9cb3-d67f75fc8eaf
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.053246   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.053246   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:18.540600   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:18.540600   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.540600   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.540600   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.544982   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:18.545096   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Audit-Id: 2c0687a9-4575-4e6f-b713-e100a94e6b86
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.545096   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.545096   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.545341   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:18.546103   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:18.546103   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.546103   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.546103   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.549508   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:18.549508   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.549508   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.549508   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Audit-Id: 783707b3-d0da-4dfe-881a-66d0f6996fb8
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.549780   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:19.041163   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:19.041163   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.041163   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.041163   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.053526   13136 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 12:28:19.054607   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Audit-Id: 4aac503e-c093-44a6-94b5-807d166d2911
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.054649   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.054649   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.054649   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.054864   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:19.055645   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:19.055645   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.055722   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.055722   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.060355   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:19.060455   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Audit-Id: 2078f208-c2d9-4a6b-85e1-b14ef2e700de
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.060455   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.060455   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.062159   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:19.062159   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:19.540499   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:19.540499   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.540499   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.540499   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.544496   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:19.544569   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Audit-Id: 0ed79164-db5f-4c00-bcc2-16a65c377f23
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.544569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.544569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.544569   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:19.545585   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:19.545585   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.545659   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.545659   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.548854   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:19.548854   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Audit-Id: 764f2081-fbdd-454f-a9ca-1639696960ce
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.549236   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.549236   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.549492   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:20.040190   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:20.040190   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.040190   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.040190   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.045282   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:20.045282   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Audit-Id: e77de3d8-c28c-468c-b1de-5ee1c12431b7
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.045282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.045282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.045510   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:20.046261   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:20.046261   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.046261   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.046261   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.052729   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:20.052729   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Audit-Id: b75bea0d-0f48-4f32-a8f2-84115b0930f2
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.052729   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.052729   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.052729   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:20.540870   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:20.540956   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.540956   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.540956   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.544859   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:20.544959   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.544959   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.544959   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.544959   13136 round_trippers.go:580]     Audit-Id: 26b4faed-1693-4877-85f3-c2a5660cfbbb
	I0203 12:28:20.545024   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.545024   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.545024   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.545212   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:20.545906   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:20.545968   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.545968   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.545968   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.548725   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:20.548725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.548725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.548725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Audit-Id: 7ba94174-41f2-4298-bdfb-58c3689dd7cf
	I0203 12:28:20.548725   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.041105   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:21.041105   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.041105   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.041105   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.044704   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.045362   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.045362   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.045362   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.045362   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.045362   13136 round_trippers.go:580]     Audit-Id: 5f75181c-1b8a-422a-b477-0acc7d466358
	I0203 12:28:21.045472   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.045472   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.045754   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:21.046551   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:21.046551   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.046629   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.046629   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.049992   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.050054   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.050054   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.050054   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.050118   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.050118   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.050118   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.050118   13136 round_trippers.go:580]     Audit-Id: 5afbc021-fb0a-4ba8-afc5-03929647065c
	I0203 12:28:21.050389   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.541160   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:21.541160   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.541160   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.541160   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.545378   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:21.545378   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.545378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.545378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Audit-Id: c997fa66-d716-46e4-84ac-0eef6bf2319b
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.545378   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:21.546487   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:21.546560   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.546560   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.546560   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.549971   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.549971   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.549971   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.549971   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Audit-Id: 49184a45-d677-4ef3-9aee-e5173a0cf69b
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.549971   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.550624   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:22.040778   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:22.040778   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.040778   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.040778   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.045427   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:22.045427   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Audit-Id: 0b46e9db-563f-4db0-9a8a-f445b0a97553
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.045427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.045427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.045754   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:22.046470   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:22.046539   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.046539   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.046539   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.052327   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:22.052327   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Audit-Id: 29856910-9e37-4971-8d4a-cb80fa46082c
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.052327   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.052327   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.052327   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:22.541584   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:22.541683   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.541683   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.541683   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.546220   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:22.546220   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.546220   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.546220   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Audit-Id: 58f71b24-d31e-4ca8-86a9-77e4d771a8fb
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.546220   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:22.547364   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:22.547364   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.547364   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.547443   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.550518   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:22.550518   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.550518   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.550518   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Audit-Id: 6cb59054-21ec-4e1a-a58c-0629e260d7da
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.550760   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:23.040538   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:23.040538   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.040538   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.040538   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.045302   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:23.045302   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.045302   13136 round_trippers.go:580]     Audit-Id: b9532050-4db0-4aab-89d2-b13890a9ce6f
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.045397   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.045397   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.045602   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:23.046356   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:23.046356   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.046427   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.046427   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.049751   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:23.049842   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.049842   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.049842   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.049842   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.049842   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.049908   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.049908   13136 round_trippers.go:580]     Audit-Id: fe40e04a-71a4-486d-a16f-82f6db65bd08
	I0203 12:28:23.050032   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:23.540432   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:23.540432   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.540432   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.540432   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.545089   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:23.545197   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Audit-Id: b1ca1781-a1cf-4772-b642-d07702dd8dac
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.545197   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.545197   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.545482   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:23.546263   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:23.546337   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.546337   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.546337   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.549265   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:23.549265   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Audit-Id: 8435724e-68b5-4416-a75e-093953587d5a
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.549265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.549265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.549465   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:24.041290   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:24.041290   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.041362   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.041362   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.045846   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:24.045846   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Audit-Id: 24c17998-e5de-4588-bb6e-9cc7203809b4
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.045846   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.045846   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.045846   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:24.046967   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:24.046967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.047034   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.047034   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.050068   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:24.050068   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Audit-Id: 53447755-5f83-455d-8a8a-f1a96b79bd22
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.050068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.050068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.050354   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:24.050498   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:24.541547   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:24.541547   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.541769   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.541769   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.548539   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:24.548539   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Audit-Id: 71e0bc1c-3925-4f8a-a728-c2b0b162b1b6
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.548539   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.548539   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.548539   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:24.550624   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:24.550624   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.550624   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.550687   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.553621   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:24.553704   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.553704   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.553704   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.553704   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.553704   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.553777   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.553777   13136 round_trippers.go:580]     Audit-Id: e540aec6-22e9-4629-a09f-e6360e52b561
	I0203 12:28:24.553902   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:25.040909   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:25.040909   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.040909   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.040909   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.044580   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.045263   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.045263   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.045263   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Audit-Id: d03f7ca4-7aef-4d2b-bc93-7493424fcf07
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.045424   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:25.046162   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:25.046267   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.046267   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.046267   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.049318   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.049318   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Audit-Id: c52fde3a-2f7d-44c3-8f1d-27257dfd3e25
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.049318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.049318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.049593   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:25.540423   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:25.540423   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.540423   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.540423   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.547830   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:25.547830   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.547830   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.547830   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Audit-Id: 38be7af3-52d8-4586-bc8b-0c1899124850
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.547830   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:25.548800   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:25.548800   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.548800   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.548800   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.551806   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.551806   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Audit-Id: 8af2c4dd-a45c-4857-9907-5fa412fb17be
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.551806   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.551806   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.551806   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:26.041850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:26.042217   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.042217   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.042217   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.046311   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:26.046383   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.046383   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.046383   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.046446   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.046446   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.046446   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.046446   13136 round_trippers.go:580]     Audit-Id: 9bddf111-6cd8-495e-a423-fe83de63ff2f
	I0203 12:28:26.046667   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:26.047647   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:26.047647   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.047647   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.047647   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.054419   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:26.054419   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.054419   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.054419   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Audit-Id: 00380662-796a-4b6a-b2da-be31e8897d04
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.054733   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.055236   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:26.055236   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
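(Note: the block above is one iteration of minikube's readiness poll, logged from pod_ready.go. Roughly every 500 ms it GETs the coredns pod and its node and records "Ready":"False" until the pod's Ready condition turns True. The following is a minimal standalone sketch of the same check using client-go; it is not minikube's actual implementation. The namespace and pod name are taken from the log; the kubeconfig path and error handling are assumptions for illustration.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is the state the log above keeps polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same request the log shows: GET /api/v1/namespaces/kube-system/pods/<name>.
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-668d6bf9bc-v2gkp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}

(End of note; the captured log continues below.)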
	I0203 12:28:26.542065   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:26.542065   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.542155   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.542155   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.545944   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:26.545944   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Audit-Id: 68c01edc-96e4-489d-8ca8-d80c71cc8695
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.546013   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.546013   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.546013   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:26.546610   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:26.546610   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.547132   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.547132   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.549850   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:26.550237   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Audit-Id: f46a0bc6-ea68-4924-ae7d-a4660c473bbb
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.550237   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.550237   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.550532   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:27.041031   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:27.041031   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.041031   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.041031   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.046151   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:27.046151   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.046151   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.046151   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Audit-Id: 80e1590c-2df2-465c-8b41-ed40274c71bb
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.046151   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:27.047674   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:27.047674   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.047674   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.047674   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.050901   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:27.050993   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Audit-Id: a169fb66-60a6-481b-a733-503aef41116c
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.050993   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.050993   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.050993   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:27.540672   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:27.540672   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.540672   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.540672   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.544670   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:27.544670   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Audit-Id: 43823f7a-d3b8-4bf2-8cd0-e504795bc4fc
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.544670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.544670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.544900   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:27.545738   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:27.545738   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.545738   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.545738   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.551454   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:27.551454   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.551454   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.551589   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.551589   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Audit-Id: 0c7ef70d-e7cc-4897-9105-f27a3c8a8989
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.551751   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:28.041494   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:28.041494   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.041494   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.041494   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.046423   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:28.046562   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Audit-Id: 4f68b248-d82c-4903-b801-08fe7d71b01c
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.046562   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.046562   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.046896   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:28.047558   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:28.047622   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.047622   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.047622   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.053727   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:28.053727   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Audit-Id: 6d9412c0-cc6b-4a99-a079-f67562e8ece2
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.053727   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.053727   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.053727   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:28.055343   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:28.541204   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:28.541204   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.541204   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.541204   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.547050   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:28.547050   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Audit-Id: 0e7477c7-253d-429e-8480-b7d36eade537
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.547150   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.547150   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.547150   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.547353   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:28.548212   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:28.548212   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.548212   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.548212   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.554394   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:28.554394   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.554460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Audit-Id: 07157447-09e8-4f06-bb37-d45f3f32fd1f
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.554460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.554644   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:29.040539   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:29.040539   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.040539   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.040539   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.044831   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.044831   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Audit-Id: bc4f0848-3d32-4b88-8ad4-f5c48561a259
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.044831   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.044831   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.044831   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:29.045529   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:29.045529   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.045529   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.045529   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.052517   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:29.052609   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Audit-Id: 4e21d5f9-f741-4802-ae6e-f3674148bfb6
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.052632   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.052632   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.052632   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:29.540880   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:29.540880   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.540880   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.540880   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.545684   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.545684   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Audit-Id: 8e746664-3bcd-43fc-b242-e4f1eab10540
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.545824   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.545824   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.546026   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:29.546759   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:29.546759   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.546759   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.546759   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.551570   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.551570   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.551570   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.551570   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Audit-Id: 5846d48e-e7fb-4e41-a9ee-497091196550
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.552543   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.040945   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:30.040945   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.040945   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.040945   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.045596   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:30.045724   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.045724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.045724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Audit-Id: 7ed00026-8a97-4216-b1e6-13905f28a2eb
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.045869   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:30.046731   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:30.046792   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.046792   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.046792   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.051711   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:30.051711   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Audit-Id: 7a2db867-af9b-4c02-9722-a611eb83285f
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.051711   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.051711   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.052025   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.541207   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:30.541207   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.541207   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.541207   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.545135   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:30.545135   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.545135   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.545249   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.545249   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Audit-Id: 14c7e090-7216-44be-8e6c-da4f9cefa4ae
	I0203 12:28:30.545411   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:30.546100   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:30.546100   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.546100   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.546100   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.561611   13136 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0203 12:28:30.561611   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.561611   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.561682   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Audit-Id: df93a41a-4cbd-4cbf-aaf1-3ab1082d98c0
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.561924   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.562353   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:31.041246   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:31.041246   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.041246   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.041246   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.046328   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.046451   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.046451   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.046451   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Audit-Id: 6e469852-daa0-44d0-8fa7-52eeaf583d0c
	I0203 12:28:31.046535   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.046695   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:31.047444   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.047444   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.047444   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.047444   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.053049   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.053049   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Audit-Id: ed3c9a43-46af-45ec-bf35-e77cc27ad430
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.053049   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.053049   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.053049   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.541658   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:31.541658   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.541658   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.541658   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.561658   13136 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0203 12:28:31.561765   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.561765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.561765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Audit-Id: 05e8bc17-d4fe-4490-b7d8-aed474b4d067
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.561765   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0203 12:28:31.562782   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.562782   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.562782   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.562782   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.571916   13136 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 12:28:31.571999   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Audit-Id: 283c7627-962f-4571-9be4-84291dc99169
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.571999   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.571999   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.572189   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.572591   13136 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.572661   13136 pod_ready.go:82] duration metric: took 21.5324352s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.572661   13136 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.572661   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:28:31.572806   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.572806   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.572854   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.598672   13136 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0203 12:28:31.598672   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.598672   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.598672   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.598779   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Audit-Id: 573796a6-41ab-40ae-a42f-ff02650f9572
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.601089   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1876","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0203 12:28:31.601089   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.601089   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.601089   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.601089   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.605966   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.605966   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Audit-Id: 67d7be9d-84ce-4bcf-8912-b746a247e527
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.605966   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.605966   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.605966   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.606969   13136 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.606969   13136 pod_ready.go:82] duration metric: took 34.3069ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.606969   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.606969   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:28:31.606969   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.606969   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.606969   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.617178   13136 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 12:28:31.617178   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.617178   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Audit-Id: a27a8639-956c-4b3f-b490-54fcfff8f4fc
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.617178   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.617436   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1856","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0203 12:28:31.617622   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.617622   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.617622   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.617622   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.622274   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.622274   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.622274   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.622274   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Audit-Id: a15c1bfc-823c-4b32-bbe7-30d292318a28
	I0203 12:28:31.622484   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.622934   13136 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.622934   13136 pod_ready.go:82] duration metric: took 15.9656ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.622986   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.623052   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:28:31.623110   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.623110   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.623110   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.625111   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:31.625111   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.625111   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Audit-Id: 6b4b99f0-968a-4a8a-b3bc-fda4c02702e5
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.625111   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.625111   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1874","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0203 12:28:31.626252   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.626354   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.626354   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.626354   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.629417   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:31.629417   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.629417   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Audit-Id: a46ba0bc-ca35-4a8a-aa06-7b13154e94f1
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.629529   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.629529   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.629643   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.629990   13136 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.630057   13136 pod_ready.go:82] duration metric: took 7.0706ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.630057   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.630140   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:28:31.630140   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.630140   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.630204   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.635858   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.635858   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.635858   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.635858   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Audit-Id: 5cd54abc-0f91-4c41-a973-f79f65739895
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.636393   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:28:31.637046   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.637117   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.637117   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.637117   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.638945   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:28:31.638945   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.638945   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.638945   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Audit-Id: fc4f78ce-c1ec-417f-9904-7c02501c5ed4
	I0203 12:28:31.639945   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.639945   13136 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.639945   13136 pod_ready.go:82] duration metric: took 9.8881ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.639945   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.742759   13136 request.go:632] Waited for 102.8128ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:28:31.743047   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:28:31.743047   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.743047   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.743047   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.746547   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:31.746662   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Audit-Id: cc4e6c0e-add0-42fa-aa85-f37f000c5894
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.746662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.746662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.747165   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"1930","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:28:31.942605   13136 request.go:632] Waited for 194.7608ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:28:31.942905   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:28:31.942905   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.942905   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.942905   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.947358   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.947487   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.947487   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.947487   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Audit-Id: 784c911b-32aa-4cdd-8b7c-197fb7ddb09f
	I0203 12:28:31.947666   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"1941","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4581 chars]
	I0203 12:28:31.947666   13136 pod_ready.go:98] node "multinode-749300-m02" hosting pod "kube-proxy-ggnq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m02" has status "Ready":"Unknown"
	I0203 12:28:31.947666   13136 pod_ready.go:82] duration metric: took 307.7175ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	E0203 12:28:31.947666   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m02" hosting pod "kube-proxy-ggnq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m02" has status "Ready":"Unknown"
	I0203 12:28:31.947666   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.141811   13136 request.go:632] Waited for 193.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:28:32.141811   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:28:32.141811   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.141811   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.141811   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.147080   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:32.147080   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.147080   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.147080   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Audit-Id: 0ae124bf-2979-42be-97ac-1e26c8b29976
	I0203 12:28:32.147080   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:28:32.341896   13136 request.go:632] Waited for 193.2518ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:28:32.342213   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:28:32.342213   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.342213   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.342213   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.346635   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:32.346702   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.346702   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.346702   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Audit-Id: 3eff7c93-2d6e-46bd-a958-4fd9539cec09
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.346982   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:28:32.347387   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:28:32.347449   13136 pod_ready.go:82] duration metric: took 399.7785ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:28:32.347449   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:28:32.347449   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.542125   13136 request.go:632] Waited for 194.5194ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:28:32.542125   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:28:32.542125   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.542125   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.542125   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.546693   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:32.546693   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.546693   13136 round_trippers.go:580]     Audit-Id: 2a24baac-a3ee-4b48-a042-ebe7fe6b8e7a
	I0203 12:28:32.546693   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.546782   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.546782   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.546782   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.546782   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.546943   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1864","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5563 chars]
	I0203 12:28:32.742517   13136 request.go:632] Waited for 195.1713ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:32.742517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:32.742517   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.742517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.742517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.747535   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:32.747535   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Audit-Id: c5843651-ee5e-49ca-b2eb-51c8601ada71
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.747535   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.747535   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.747535   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:32.748122   13136 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:32.748122   13136 pod_ready.go:82] duration metric: took 400.596ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.748122   13136 pod_ready.go:39] duration metric: took 22.7307157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:28:32.748122   13136 api_server.go:52] waiting for apiserver process to appear ...
	I0203 12:28:32.755751   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:32.785041   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:32.785041   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:32.792964   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:32.821550   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:32.821550   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:32.829459   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:32.853753   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:32.853753   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:32.853753   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:32.861445   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:32.884026   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:32.884838   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:32.884838   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:32.895690   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:32.921034   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:32.921034   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:32.921034   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:32.929105   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:32.957040   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:32.957099   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:32.957208   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:32.966192   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:32.998981   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:32.998981   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:32.998981   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
	I0203 12:28:33.000010   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:33.000055   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:33.028303   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.028429   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:33.028487   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.028542   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:33.028542   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:33.028595   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:33.028675   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.028709   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:33.028762   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.028823   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.028879   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.028879   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.031822   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:33.031871   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:33.074302   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:33.074899   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.074899   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:33.074899   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:33.075422   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:33.075422   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:33.076706   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:33.076706   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:33.076706   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:33.077141   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:33.077141   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:33.077189   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:33.077189   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:33.077239   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:33.077239   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.077276   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:33.077292   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:33.077292   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:33.078810   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:33.078810   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:33.078969   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:33.078969   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:33.079133   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.079165   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.079165   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:33.079196   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.079211   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:33.079237   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.101987   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:33.101987   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:33.135709   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-c
luster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:33.137123   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:33.137896   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
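
The etcd block above shows an existing member (aee9b6e79987349e) restarting from /var/lib/minikube/etcd, replaying its WAL, winning a fresh single-node election at term 3, and then serving clients on 127.0.0.1:2379 and 172.25.12.244:2379. A minimal spot-check of that member is sketched below; the endpoint and certificate paths are copied from the flags in the log, while the pod name assumes the usual kubeadm static-pod convention of etcd-<node name>:

	# Sketch: ask the restarted etcd member for its status from inside its pod.
	kubectl -n kube-system exec etcd-multinode-749300 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status --write-out=table
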
	I0203 12:28:33.144988   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:33.144988   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:33.181652   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.181964   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:33.181964   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.182060   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:33.182060   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:33.182060   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:33.182060   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.182134   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:33.182174   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.182251   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:33.182781   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183392   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183440   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183482   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183527   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183568   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183615   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183663   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:33.183749   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183796   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.184373   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:33.184425   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:33.184462   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184538   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.184597   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
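
The scheduler block above is dominated by startup-time RBAC errors: each informer's list/watch is rejected with "forbidden" for user system:kube-scheduler until the bootstrap role bindings land, the caches finally sync at 12:04:56, and the closing "finished without leader elect" only records the old instance being shut down when the node stopped at 12:25. If those permissions needed to be re-verified during triage, kubectl impersonation covers it; the resources below mirror the ones named in the errors:

	# Sketch: confirm the scheduler's RBAC using kubectl impersonation.
	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler
	kubectl auth can-i get configmaps -n kube-system --as=system:kube-scheduler
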
	I0203 12:28:33.197589   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:33.197589   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:33.226851   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.226920   13136 command_runner.go:130] !  >
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.226920   13136 command_runner.go:130] !  >
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
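
The "Error cleaning up nftables rules ... Operation not supported" entries above are kube-proxy's startup probe for nftables; when the add-table commands fail it logs the error and carries on, and the later "Using iptables Proxier" line confirms the fallback. The probe can be reproduced by hand inside the guest roughly as below (failure is the benign, expected outcome on this kernel, and nft may not even be installed in the guest image); the profile name is the one used by this test:

	# Sketch: rerun kube-proxy's nftables probe inside the minikube node.
	minikube ssh -p multinode-749300 -- "sudo nft add table ip kube-proxy; sudo nft list tables"
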
	I0203 12:28:33.230083   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:33.230600   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:33.257414   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:33.257414   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:33.257414   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.257414   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:33.257414   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:33.257414   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257951   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257951   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
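
kindnetd above skips its network-policy controller for the same nftables reason and then performs its core task: for each peer node it programs a route to that node's pod CIDR via the node's IP, e.g. 10.244.1.0/24 via 172.25.8.35 for multinode-749300-m02 and 10.244.3.0/24 via 172.25.0.54 for multinode-749300-m03. Those routes can be inspected from inside the primary node roughly as below, using the addresses taken from the log:

	# Sketch: list the pod-CIDR routes kindnet programmed on the primary node.
	minikube ssh -p multinode-749300 -- "ip route show | grep 10.244"
	# What kindnet effectively adds for the m02 node (values from the log above):
	#   ip route add 10.244.1.0/24 via 172.25.8.35
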
	I0203 12:28:33.261308   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:33.261393   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:33.288244   13136 command_runner.go:130] > .:53
	I0203 12:28:33.288244   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:33.288244   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:33.288244   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:33.288244   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
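
The CoreDNS block is short: version 1.11.3 is serving on .:53 and the one query shown appears to be CoreDNS's own startup self-check (a random HINFO lookup), for which an NXDOMAIN answer is normal. Had in-cluster DNS needed checking during triage, a disposable pod is the usual route; the busybox image tag below is illustrative rather than taken from this run:

	# Sketch: verify in-cluster DNS resolution with a throwaway pod.
	kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 \
	  -- nslookup kubernetes.default.svc.cluster.local
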
	I0203 12:28:33.290200   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:33.290200   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:33.322297   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:33.322297   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:33.322297   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:33.322385   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:33.322463   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:33.323025   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.323025   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:33.323025   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:33.323879   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:33.332197   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:33.332197   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:33.362203   13136 command_runner.go:130] > .:53
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:33.362203   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:33.362203   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:33.362688   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:33.362688   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:33.365786   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:33.365786   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:33.398090   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398090   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:33.398941   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:33.398941   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399091   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399091   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399241   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399241   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399315   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:33.399625   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.399625   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.399666   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:33.399666   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:33.399702   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:33.400294   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:33.400294   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:33.400460   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:33.400564   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:33.400564   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:33.401759   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401759   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401820   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401820   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401866   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:33.402445   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:33.402445   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:33.402675   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.402974   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:33.402974   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:33.403006   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:33.403132   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403852   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404482   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404516   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404516   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405204   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405240   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.405409   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405448   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405494   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405521   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405558   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405558   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405592   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405642   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.433844   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:33.433844   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:33.506019   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:33.506019   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:33.506019   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:33.506019   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:33.506019   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:33.506019   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:33.506019   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:33.506019   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:33.506019   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:33.506689   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:33.506711   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:33.506711   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:33.506711   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	I0203 12:28:33.509303   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:33.509303   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:33.545304   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:33.546131   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:33.546352   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546352   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546432   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546432   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:33.546665   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:33.546899   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:33.546899   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:33.547528   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:33.547993   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:33.548040   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:33.548077   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:33.548101   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.548129   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.548176   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548215   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548260   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.548299   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.548344   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548383   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548436   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548483   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548518   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.548556   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548629   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548663   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548663   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549226   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549264   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549384   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549946   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:33.549946   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550544   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:33.550615   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:33.550696   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:33.550737   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:33.550776   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.550810   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.550883   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550883   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550957   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550996   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551031   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551069   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551103   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551177   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551215   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551250   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551289   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551362   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551397   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551428   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551490   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551520   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551578   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551636   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.552181   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.552217   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552262   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:33.553857   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:33.553857   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:33.553897   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:33.553932   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:33.554044   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:33.554044   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:33.554085   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:33.602268   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:33.602268   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:33.903854   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:33.903854   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:33.903854   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.903854   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:33.903854   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:33.903854   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.903854   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.903854   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:25 +0000
	I0203 12:28:33.903854   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:33.903854   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:33.903854   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:33.903854   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:33.903854   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:33.903854   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.903854   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.904844   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.904844   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.904844   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.904844   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:33.904844   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:33.904844   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.904844   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:33.904844   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:33.904844   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:33.904844   13136 command_runner.go:130] > Events:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Warning  Rebooted                 68s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   RegisteredNode           65s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:33.905846   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:33.905846   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.905846   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.905846   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:33.905846   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:33.905846   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.905846   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.905846   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:33.905846   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:33.905846   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.905846   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.905846   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.905846   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.905846   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:33.905846   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.905846   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.906861   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:33.906861   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:33.906861   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.906861   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:33.906861   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:33.906861   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:33.906861   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:33.906861   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:33.906861   13136 command_runner.go:130] > Events:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  RegisteredNode           65s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:33.906861   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:33.906861   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.906861   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.906861   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:33.906861   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:33.906861   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.906861   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.906861   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:33.906861   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.906861   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.907843   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.907843   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:33.907843   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.907843   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.907843   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:33.907843   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:33.907843   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.907843   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:33.907843   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:33.907843   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:33.907843   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:33.907843   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:33.907843   13136 command_runner.go:130] > Events:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  Starting                 5m32s                  kube-proxy       
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m35s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  RegisteredNode           5m34s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeNotReady             3m43s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  RegisteredNode           65s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:33.919215   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:33.919215   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:33.949137   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:33.949250   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.949250   13136 command_runner.go:130] !  >
	I0203 12:28:33.949250   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:33.949353   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.949353   13136 command_runner.go:130] !  >
	I0203 12:28:33.949353   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:33.949353   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:33.949353   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:33.949442   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:33.949473   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:33.951678   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:33.951678   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:33.982714   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.982714   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:33.983011   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:33.983060   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:33.983060   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:33.983221   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:33.983221   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:33.983552   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:33.985136   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:33.985193   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:33.985236   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:33.985566   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.985566   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:33.985634   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.985634   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:33.986203   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.987114   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:33.987238   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:33.987238   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:33.987271   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:33.987509   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.989798   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:33.989864   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:33.989864   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.990061   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:33.990061   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:33.990491   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:33.990491   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:33.990620   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.991085   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:33.991085   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:34.009905   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:34.009905   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:34.048684   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049336   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049426   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049426   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:34.049617   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049829   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050193   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050193   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050222   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050575   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050575   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051002   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051238   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051238   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053448   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:34.053539   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053569   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053569   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053634   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053634   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053663   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053663   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053869   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053962   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053992   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053992   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054057   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054161   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054354   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054396   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:34.054828   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054828   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054858   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054858   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054966   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056434   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056434   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056477   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056477   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056553   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056553   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056605   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:34.056605   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056677   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056677   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057334   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:34.057334   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057362   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057362   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057568   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.075988   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:34.075988   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:34.098483   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:34.099503   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:34.099626   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:34.099676   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:34.099732   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:34.099787   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:34.099846   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:34.099846   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:34.099846   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:34.099892   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:34.099936   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:34.099970   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:34.100160   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:34.100160   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:34.100160   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:36.611776   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:28:36.636811   13136 command_runner.go:130] > 1987
	I0203 12:28:36.636811   13136 api_server.go:72] duration metric: took 1m6.4297971s to wait for apiserver process to appear ...
	I0203 12:28:36.636811   13136 api_server.go:88] waiting for apiserver healthz status ...
	I0203 12:28:36.644395   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:36.671522   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:36.672330   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:36.679417   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:36.708730   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:36.708842   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:36.715321   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:36.741336   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:36.741336   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:36.741336   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:36.749323   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:36.771595   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:36.771595   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:36.773223   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:36.779219   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:36.805347   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:36.805347   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:36.806760   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:36.813596   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:36.837656   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:36.837656   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:36.839592   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:36.847564   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:36.873872   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:36.874445   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:36.874526   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
	I0203 12:28:36.874625   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:36.874625   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:36.901490   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:36.901635   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:36.901716   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:36.901797   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:36.901797   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:36.901859   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:36.901882   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:36.901910   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.905748   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:36.905748   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.938775   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:36.939395   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:36.939395   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939459   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939459   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939655   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939655   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939725   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939725   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940526   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:36.940526   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:36.940651   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:36.940867   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:36.940867   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:36.941058   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:36.941058   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941439   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941439   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941637   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:36.941861   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942028   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942057   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942093   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942199   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942199   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942226   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942262   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943460   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943460   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.973089   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:36.973089   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:37.004489   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:37.005010   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:37.005051   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005051   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:37.005051   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:37.005622   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005622   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005663   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:37.005663   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:37.006221   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:37.006221   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:37.006787   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:37.006787   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:37.006905   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:37.007031   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:37.007053   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:37.007098   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:37.007128   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:37.015904   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:37.015904   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:37.043555   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:37.044122   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:37.044142   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:37.044142   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:37.044243   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:37.044243   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:37.044757   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.044824   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:37.045474   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:37.045474   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:37.045528   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:37.045591   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.045617   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:37.045696   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.046526   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	I0203 12:28:37.054724   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:37.055243   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:37.088930   13136 command_runner.go:130] > .:53
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:37.089005   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:37.089005   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:37.089526   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:37.089566   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:37.092821   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:37.092821   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.127052   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:37.127623   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:37.128184   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:37.128226   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:37.128226   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:37.128270   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:37.128270   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:37.128307   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:37.128307   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:37.128351   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:37.128351   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:37.128389   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:37.128422   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:37.128460   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:37.128532   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.128566   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129129   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.129168   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.129211   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129320   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129365   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129404   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.129438   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129497   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:37.129616   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130216   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130256   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.130256   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130307   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130307   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130343   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130343   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130406   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130451   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130451   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:37.130842   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:37.130842   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.130919   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.130919   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:37.130980   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.130980   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:37.131041   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:37.131105   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:37.131105   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131291   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131847   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.132449   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.132483   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133045   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133123   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.133788   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133811   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133870   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133870   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:37.134019   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:37.134019   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134088   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134151   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134151   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134214   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134280   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134280   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134349   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134349   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134426   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134426   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134491   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134491   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:37.134697   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:37.134724   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:37.134787   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:37.180829   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:37.180829   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:37.386338   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:37.386380   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:37.386433   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.386433   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.386433   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.386525   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:37.386578   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:37.386611   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.386628   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:37.386628   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:37.386669   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.386669   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.386669   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.386711   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:37.386711   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:37.386711   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.386711   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.386711   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:37.386711   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.386711   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:35 +0000
	I0203 12:28:37.386711   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.386805   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:37.386844   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:37.386844   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:37.386903   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:37.386903   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:37.386957   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:37.386957   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.387006   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:37.387006   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:37.387006   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.387052   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.387052   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.387052   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.387094   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.387094   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.387124   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.387124   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.387124   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.387124   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.387181   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.387181   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.387181   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:37.387215   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.387215   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.387294   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:37.387366   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:37.387366   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:37.387397   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.387434   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.387434   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:37.387478   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:37.387478   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0203 12:28:37.387527   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:37.387527   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387661   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387661   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.387661   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.387661   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:37.387661   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:37.387731   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:37.387731   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:37.387761   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:37.387761   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:37.387761   13136 command_runner.go:130] > Events:
	I0203 12:28:37.387799   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:37.387799   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 68s                kube-proxy       
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.387969   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.387969   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.387999   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:37.388022   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Warning  Rebooted                 72s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   RegisteredNode           69s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:37.388186   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:37.388186   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:37.388186   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.388216   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.388238   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.388238   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.388332   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.388402   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:37.388402   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:37.388402   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:37.388434   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.388434   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.388434   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:37.388466   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.388466   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:37.388466   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.388466   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:37.388466   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:37.388527   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388527   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388577   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388577   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388615   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.388633   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:37.388633   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:37.388633   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.388633   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.388673   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.388673   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.388673   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.388673   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.388673   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.388723   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.388723   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.388723   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.388723   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.388723   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.388770   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:37.388770   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.388819   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.388868   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.388868   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:37.388868   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:37.388868   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:37.388868   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.388923   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.388923   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:37.388923   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:37.388994   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:37.388994   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.389025   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:37.389025   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:37.389025   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:37.389025   13136 command_runner.go:130] > Events:
	I0203 12:28:37.389025   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:37.389094   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:37.389228   13136 command_runner.go:130] >   Normal  RegisteredNode           69s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:37.389228   13136 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:37.389228   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:37.389228   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:37.389228   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.389228   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.389369   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.389369   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:37.389439   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:37.389439   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:37.389439   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.389492   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.389492   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:37.389492   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.389492   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:37.389524   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.389524   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:37.389562   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:37.389562   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.389655   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:37.389655   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:37.389655   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.389655   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.389655   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.389705   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.389705   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.389705   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.389754   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.389754   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.389754   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.389754   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.389754   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.389754   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.389806   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:37.389806   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.389806   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.389877   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:37.389877   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:37.389946   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:37.389977   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.389977   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.389977   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:37.390010   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:37.390010   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.390079   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.390110   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:37.390110   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:37.390110   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:37.390142   13136 command_runner.go:130] > Events:
	I0203 12:28:37.390142   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:37.390142   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:37.390213   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:37.390213   13136 command_runner.go:130] >   Normal  Starting                 5m35s                  kube-proxy       
	I0203 12:28:37.390243   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m39s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:37.390346   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:37.390376   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  RegisteredNode           5m38s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeNotReady             3m47s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:37.390481   13136 command_runner.go:130] >   Normal  RegisteredNode           69s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:37.400039   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:37.400039   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:37.431981   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432042   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:37.432042   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432103   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.432208   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:37.432208   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:37.432255   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432299   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432350   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432399   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:37.432446   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432495   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432495   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432545   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432646   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432691   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432691   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432691   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:37.432798   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432798   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:37.432870   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432870   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:37.432952   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432952   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432952   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433042   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:37.433042   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433122   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433122   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433193   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:37.433273   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433273   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:37.433345   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433398   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.433398   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:37.433480   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:37.433523   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433523   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.433579   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433628   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:37.433684   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433684   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433684   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433765   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:37.433819   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433862   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433913   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.434045   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:37.434045   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	I0203 12:28:37.446891   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:37.446891   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.475222   13136 command_runner.go:130] !  >
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.475222   13136 command_runner.go:130] !  >
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:37.478951   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:37.478951   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:37.505336   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:37.506127   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.506127   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.506183   13136 command_runner.go:130] !  >
	I0203 12:28:37.506183   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.506183   13136 command_runner.go:130] !  >
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:37.506183   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:37.509378   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:37.509459   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:37.549272   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.549272   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:37.549394   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.549394   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:37.549932   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:37.550074   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:37.550074   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.550260   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:37.550260   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:37.550360   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.550360   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:37.550411   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.550411   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:37.550456   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:37.550456   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:37.550646   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.550695   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550695   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:37.550741   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:37.550741   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550847   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:37.550847   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:37.550896   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.550941   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550941   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:37.551036   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:37.551096   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:37.551134   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:37.551177   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:37.551177   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:37.551220   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:37.551220   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:37.551262   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:37.551262   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:37.551343   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:37.551343   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:37.551387   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:37.551387   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:37.551430   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:37.551430   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:37.551469   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:37.551469   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:37.551519   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:37.551519   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:37.551564   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:37.551564   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:37.551607   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:37.551647   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:37.551647   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:37.551689   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:37.551689   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:37.551733   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:37.551776   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:37.551776   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:37.551815   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:37.551858   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:37.551858   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:37.551902   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:37.551902   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:37.551946   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:37.551984   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:37.551984   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:37.552029   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:37.552029   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:37.552117   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:37.552117   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:37.552156   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:37.552156   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:37.552200   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:37.552244   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:37.552287   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:37.552287   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:37.552325   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:37.552325   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:37.552368   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:37.552368   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:37.552413   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:37.552413   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:37.552456   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:37.552456   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:37.552496   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:37.552496   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:37.552701   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:37.552701   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:37.552836   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:37.552932   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:37.552932   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:37.553151   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:37.553151   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:37.553183   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.553183   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:37.553252   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:37.553283   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:37.553408   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:37.553408   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:37.553504   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:37.553594   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.553594   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:37.553684   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:37.553714   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.553714   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:37.553858   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:37.553945   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:37.553945   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:37.553988   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554084   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:37.554084   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:37.554182   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.554288   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:37.554288   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554547   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:37.554547   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:37.571137   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:37.571137   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:37.601512   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:37.602201   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.602201   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:37.602340   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:37.603118   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:37.603118   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:37.603282   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:37.604561   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:37.605721   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:37.605749   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:37.605749   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:37.605793   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.605833   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.605877   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:37.605918   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:37.605918   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:37.605963   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:37.605963   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:37.606003   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:37.606003   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:37.606052   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:37.606093   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:37.606093   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:37.606138   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.606138   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:37.606178   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.606222   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:37.606222   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:37.606262   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:37.606262   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:37.606307   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:37.606347   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:37.606347   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:37.606391   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:37.606391   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:37.606431   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:37.606431   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:37.606475   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:37.606475   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:37.606514   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:37.606514   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:37.606558   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:37.606558   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:37.606599   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:37.606599   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:37.606682   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:37.606727   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:37.606727   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:37.606767   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.606811   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.606811   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:37.606851   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:37.606851   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.606895   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:37.606895   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.606934   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:37.606934   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.606978   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.606978   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:37.607018   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:37.607018   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:37.607062   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:37.607062   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:37.607102   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:37.607313   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:37.607353   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:37.607353   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:37.607398   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:37.607398   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:37.607438   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.607438   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.607481   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:37.607481   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.607521   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:37.607521   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:37.607565   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607605   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:37.607605   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:37.607649   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:37.607689   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:37.607733   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:37.607733   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:37.607773   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607773   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607817   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:37.607857   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:37.607857   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:37.607902   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:37.607942   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:37.607942   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:37.607986   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608026   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:37.608026   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:37.608070   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608070   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608111   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608155   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608155   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608196   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:37.608196   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608240   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608280   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608280   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608399   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608955   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608955   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609585   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609585   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609634   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609634   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609679   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609721   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609721   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609765   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.631157   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:37.631157   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:37.652049   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:37.652049   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:37.652049   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:37.652049   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:37.654990   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:37.655070   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:37.690699   13136 command_runner.go:130] > .:53
	I0203 12:28:37.690737   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:37.690737   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:37.690737   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:37.690737   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	I0203 12:28:37.691043   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:37.691043   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.721236   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:37.721313   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.758786   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.758786   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.758883   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.758883   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.758982   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.758982   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759073   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759121   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759121   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759217   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759217   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759308   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759308   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759406   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:37.759406   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759497   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759497   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759593   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759593   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759636   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759636   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759682   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:37.759682   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759777   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759777   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759918   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759918   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759965   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759965   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:37.760008   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760195   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760195   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:37.760242   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760339   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760339   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760382   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760382   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760527   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760527   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760619   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760619   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:37.760715   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760715   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760855   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760855   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760903   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760903   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760945   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760945   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:37.761089   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761089   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761180   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761180   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761276   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761276   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761416   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761416   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761507   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761507   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761555   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761555   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:37.761650   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761650   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761741   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:37.761741   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761790   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761838   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761838   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:37.761979   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761979   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762027   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762027   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:37.762166   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762166   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762213   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762213   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762352   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762352   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762489   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762585   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762585   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:37.762728   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762728   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:37.762776   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762776   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762819   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762819   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762964   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762964   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:37.763006   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763006   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763192   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:37.763192   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763240   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763336   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763336   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:37.763426   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763426   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763533   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763533   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764191   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764191   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764751   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:37.764751   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.782138   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:37.782138   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:37.845837   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:37.845837   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:37.845837   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:37.845837   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:37.845837   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:37.845837   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:37.845837   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:37.845837   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:37.845837   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:37.846368   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:37.846412   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:37.846412   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:37.846412   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	I0203 12:28:40.350062   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:28:40.358129   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
	I0203 12:28:40.358387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/version
	I0203 12:28:40.358387   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:40.358426   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:40.358426   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:40.360856   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:40.360856   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:40.360954   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:40.360954   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Content-Length: 263
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:40 GMT
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Audit-Id: fc39d40c-2ddd-4920-8f6d-faabd6c24e11
	I0203 12:28:40.360954   13136 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0203 12:28:40.361062   13136 api_server.go:141] control plane version: v1.32.1
	I0203 12:28:40.361062   13136 api_server.go:131] duration metric: took 3.7242091s to wait for apiserver health ...
	I0203 12:28:40.361062   13136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 12:28:40.367792   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:40.398296   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:40.398296   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:40.406134   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:40.430191   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:40.430191   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:40.436999   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:40.465065   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:40.465710   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:40.465710   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:40.472612   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:40.500066   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:40.500098   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:40.500134   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:40.507740   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:40.534077   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:40.534122   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:40.534122   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:40.540305   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:40.564122   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:40.564122   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:40.564211   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:40.571089   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:40.604629   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:40.604629   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:40.606436   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
	I0203 12:28:40.606571   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:40.606571   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:40.690041   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:40.690041   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:40.882462   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:40.882512   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:40.882512   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.882512   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.882567   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.882567   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:40.882636   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.882666   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.882689   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.882743   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:40.882767   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:40.882806   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.882806   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:40.882861   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:40.882861   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.882917   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.882917   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.882917   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:40.882980   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:40.882980   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.882980   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.882980   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:40.883046   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.883046   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:35 +0000
	I0203 12:28:40.883046   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.883118   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:40.883118   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:40.883174   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:40.883174   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:40.883233   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:40.883233   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:40.883289   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.883289   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:40.883289   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:40.883361   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.883361   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.883418   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.883418   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.883418   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.883418   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.883480   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.883480   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.883480   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.883536   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.883536   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.883536   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.883536   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.883536   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:40.883536   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:40.883617   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:40.883617   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.883676   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.883738   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.883738   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.883795   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:40.883795   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:40.883795   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:40.883866   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.883866   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.883905   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:40.883905   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:40.884002   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0203 12:28:40.884002   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:40.884074   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884235   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884235   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.884235   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.884307   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:40.884307   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:40.884362   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:40.884362   13136 command_runner.go:130] > Events:
	I0203 12:28:40.884464   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:40.884464   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:40.884464   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:40.884535   13136 command_runner.go:130] >   Normal   Starting                 72s                kube-proxy       
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   Starting                 81s                kubelet          Starting kubelet.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Warning  Rebooted                 75s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   RegisteredNode           72s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:40.884763   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:40.884763   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.884763   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.884763   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:40.884763   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:40.884763   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.884763   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.884763   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.884763   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:40.884763   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:40.884763   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:40.884763   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.885304   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:40.885419   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:40.885419   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.885419   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.885491   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.885491   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.885530   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.885530   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.885530   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.885530   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.885623   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.885623   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.885623   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.885623   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.885623   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.885695   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:40.885750   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:40.885750   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:40.885750   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.885750   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.885929   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.885929   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.885929   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:40.885986   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:40.885986   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:40.885986   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.885986   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.886095   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:40.886095   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:40.886166   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:40.886166   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.886221   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.886221   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:40.886221   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:40.886221   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:40.886323   13136 command_runner.go:130] > Events:
	I0203 12:28:40.886394   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:40.886394   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:40.886640   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:40.886696   13136 command_runner.go:130] >   Normal  RegisteredNode           72s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:40.886696   13136 command_runner.go:130] >   Normal  NodeNotReady             22s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:40.886696   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:40.886696   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:40.886696   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:40.886874   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.886874   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.886932   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.887034   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.887034   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.887034   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:40.887105   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:40.887160   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:40.887160   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.887160   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.887160   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:40.887160   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.887160   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:40.887261   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.887261   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:40.887333   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:40.887388   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887388   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887388   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887492   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887492   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.887492   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:40.887492   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:40.887597   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.887597   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.887597   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.887597   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.887597   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.887597   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.887697   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.887697   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.887697   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.887697   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.887697   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.887769   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.887809   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.887809   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:40.887809   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:40.887809   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.887896   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.887968   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.887968   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.888026   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.888026   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:40.888026   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:40.888026   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:40.888133   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.888133   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.888133   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:40.888264   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:40.888264   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.888264   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.888264   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:40.888365   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:40.888365   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:40.888365   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:40.888365   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:40.888438   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:40.888438   13136 command_runner.go:130] > Events:
	I0203 12:28:40.888476   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:40.888476   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:40.888476   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:40.888664   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.888750   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.888750   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:40.888803   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m42s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:40.888866   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:40.888900   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.888955   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:40.888955   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.889006   13136 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:40.889085   13136 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:40.889125   13136 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:40.889125   13136 command_runner.go:130] >   Normal  RegisteredNode           72s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:40.899700   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:40.899700   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:40.931168   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:40.931168   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:40.931695   13136 command_runner.go:130] !  >
	I0203 12:28:40.931695   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:40.931732   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:40.931767   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:40.931767   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:40.931767   13136 command_runner.go:130] !  >
	I0203 12:28:40.931767   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:40.931823   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:40.931823   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:40.931823   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:40.932038   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:40.932038   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:40.934191   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:40.935204   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.966043   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.966101   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.966138   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.966226   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.966707   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972492   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
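	(The kindnet lines above trace a simple periodic reconciliation: every ~10s each node is visited, the current node is skipped, and every remote node's pod CIDR is kept routable via that node's IP — see the route added at 12:22:59 after m03 moved to 172.25.0.54 / 10.244.3.0/24. This is not the kindnet source, only a minimal Go sketch of that loop; the node/IP/CIDR values are copied from the log, while the `node` type, `reconcile` function, and printing in place of real route programming are invented for illustration.)

	package main

	import (
		"fmt"
		"net"
	)

	// node mirrors the fields visible in the kindnet log lines above:
	// the node's IP and its assigned pod CIDR.
	type node struct {
		name    string
		ip      net.IP
		podCIDR *net.IPNet
	}

	// reconcile sketches the loop the logs show every ~10s: the current
	// node is only acknowledged, while every remote node's pod CIDR is
	// routed via that node's IP (here we just print the route we would
	// program instead of touching the kernel routing table).
	func reconcile(current string, nodes []node) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
			if n.name == current {
				fmt.Println("handling current node")
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
			fmt.Printf("Adding route {Dst: %s Gw: %s}\n", n.podCIDR, n.ip)
		}
	}

	func main() {
		mustCIDR := func(s string) *net.IPNet { _, c, _ := net.ParseCIDR(s); return c }
		// Values taken from the captured log; CIDR for the primary node is assumed.
		reconcile("multinode-749300", []node{
			{"multinode-749300", net.ParseIP("172.25.1.53"), mustCIDR("10.244.0.0/24")},
			{"multinode-749300-m02", net.ParseIP("172.25.8.35"), mustCIDR("10.244.1.0/24")},
			{"multinode-749300-m03", net.ParseIP("172.25.0.54"), mustCIDR("10.244.3.0/24")},
		})
	}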
	I0203 12:28:40.994317   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:40.994317   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:41.018064   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:41.018249   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:41.018334   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:41.018469   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:41.018469   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:41.020436   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:41.020436   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:41.048146   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:41.048391   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:41.048435   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:41.048513   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:41.048513   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:41.048551   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:41.048576   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048625   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:41.048625   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048671   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048671   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048720   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:41.048720   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048767   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.048767   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:41.048767   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048816   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:41.048816   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:41.048862   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:41.048862   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:41.048910   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:41.048910   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:41.048956   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:41.048956   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048956   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049006   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:41.049006   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049052   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:41.049052   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:41.049100   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049100   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:41.049146   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049146   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049194   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:41.049240   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049240   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049289   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:41.049289   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:41.049334   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:41.049334   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049334   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049383   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:41.049383   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049430   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049430   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:41.049479   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049479   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049524   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:41.049524   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:41.049572   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:41.049572   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049619   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:41.049619   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:41.049619   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:41.049667   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:41.049667   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049713   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049713   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:41.049761   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049761   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:41.049807   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049807   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:41.049855   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:41.049902   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.049902   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.049950   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.049950   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:41.049995   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:41.049995   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:41.050044   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:41.050044   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:41.050138   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:41.050138   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:41.050184   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:41.050184   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:41.050234   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:41.050234   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:41.050328   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:41.050328   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:41.050374   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:41.050374   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:41.050469   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:41.050469   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:41.050621   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:41.050716   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:41.050762   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:41.050762   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:41.050815   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:41.050815   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:41.050962   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:41.050962   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:41.051056   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:41.051056   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:41.051106   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:41.051106   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:41.060764   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:41.060764   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.094767   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:41.094839   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:41.125303   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.125786   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:41.125950   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:41.125950   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126481   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.126481   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126579   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:41.126625   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126665   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.126665   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126736   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:41.126736   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126736   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:41.126857   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126857   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.126919   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126919   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:41.126981   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126981   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:41.127050   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127050   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.127135   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:41.127135   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:41.127205   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127269   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127269   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127336   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:41.127336   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127402   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127402   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127471   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127471   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127547   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127616   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127616   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:41.127681   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127681   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:41.127751   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127751   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:41.127824   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127824   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127895   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127895   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:41.127977   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128047   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128114   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128173   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:41.128244   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128244   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:41.128290   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128337   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.128383   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:41.128383   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:41.128423   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128468   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.128508   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128508   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:41.128595   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128595   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128680   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128680   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:41.128724   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128765   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128811   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128851   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.128851   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:41.128897   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.128897   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:41.128897   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	I0203 12:28:41.141989   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:41.142983   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:41.170977   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:41.170977   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:41.170977   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.171399   13136 command_runner.go:130] !  >
	I0203 12:28:41.171399   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:41.171510   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:41.171533   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.171533   13136 command_runner.go:130] !  >
	I0203 12:28:41.171621   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:41.171621   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:41.171669   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:41.171669   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:41.171777   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:41.172003   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:41.172003   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:41.176171   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:41.176171   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:41.205878   13136 command_runner.go:130] > .:53
	I0203 12:28:41.205931   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:41.205931   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:41.205980   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:41.205980   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	I0203 12:28:41.206466   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:41.206514   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:41.236084   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.236084   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:41.236833   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:41.236861   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.237493   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:41.237867   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:41.237904   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:41.237947   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:41.238611   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:41.238655   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:41.238697   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:41.238697   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:41.238743   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:41.238784   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:41.238819   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:41.238819   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:41.238859   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:41.238894   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:41.238933   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:41.238975   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:41.239015   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:41.239057   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:41.239057   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:41.239097   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:41.239137   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:41.239176   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:41.239211   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:41.239250   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.239285   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:41.239325   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:41.239366   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:41.239405   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:41.239440   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:41.239479   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:41.239520   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:41.239560   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:41.239595   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:41.239595   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:41.239635   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:41.239675   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:41.239675   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:41.239715   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:41.239715   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:41.239749   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:41.239789   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:41.239823   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:41.239863   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:41.239904   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:41.239904   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:41.239944   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:41.239944   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:41.239985   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:41.240025   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.240025   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:41.240065   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:41.240065   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:41.240809   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:41.240809   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:41.241262   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:41.241262   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:41.241410   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.241410   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.241585   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:41.241585   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:41.241777   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:41.241777   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:41.241877   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241987   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:41.241987   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242108   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242108   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:41.242289   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242289   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:41.242407   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:41.260313   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:41.260313   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:41.290530   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:41.290530   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:41.290530   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.290530   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:41.290530   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:41.290530   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291546   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660508       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660637       1 main.go:301] handling current node
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660667       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.660675       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.661328       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.661463       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.294432   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:41.294506   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.326853   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.326853   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.326914   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.327157   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327176   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327176   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327946   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.328282   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:41.328282   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328481   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328497   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328546   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328546   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:41.328951   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:41.328978   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:41.329006   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:41.329359   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:41.329359   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:41.329705   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:41.329705   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329843   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329843   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.330345   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.330345   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330548   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330548   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:41.331046   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:41.331131   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:41.331152   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331859   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331897   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331939   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331939   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332581   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333222   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333253   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:41.333297   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.362391   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:41.362391   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:41.392746   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:41.393649   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:41.393761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:41.393860   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:41.393948   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:41.394013   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:41.394013   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:41.394075   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:41.394075   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:41.394209   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:41.394209   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:41.394271   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:41.394271   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:41.394404   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:41.394404   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:41.394466   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:41.394466   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:41.394535   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:41.394535   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:41.394600   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:41.394600   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:41.394761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:41.394761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:41.394827   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:41.394827   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:41.394960   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:41.394960   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:41.395023   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:41.395023   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.395218   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	I0203 12:28:41.405384   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:41.405384   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:41.434666   13136 command_runner.go:130] > .:53
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:41.434666   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:41.434666   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:41.438438   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:41.438517   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:41.470535   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:41.471290   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:41.471290   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:41.472225   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:41.476881   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:41.477472   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:41.477509   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479281   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:41.479889   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:41.479924   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480601   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480634   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.502967   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:41.502967   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:41.569088   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:41.569088   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:41.569088   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:41.569088   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:41.569088   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:41.569088   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:41.569088   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:41.569088   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:41.569088   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:41.569088   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:41.569088   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:41.569088   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	I0203 12:28:44.072206   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:44.072206   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.072206   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.072206   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.078329   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:44.078329   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.078329   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Audit-Id: a5ed77d1-f712-4996-9675-6c8567838a53
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.078329   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.079663   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90284 chars]
	I0203 12:28:44.082964   13136 system_pods.go:59] 12 kube-system pods found
	I0203 12:28:44.082964   13136 system_pods.go:61] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:28:44.082964   13136 system_pods.go:61] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:28:44.083492   13136 system_pods.go:74] duration metric: took 3.7223887s to wait for pod list to return data ...
	I0203 12:28:44.083598   13136 default_sa.go:34] waiting for default service account to be created ...
	I0203 12:28:44.083667   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/default/serviceaccounts
	I0203 12:28:44.083667   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.083667   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.083667   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.089235   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:44.089235   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Content-Length: 262
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Audit-Id: e99cee76-01fc-4f73-ba57-8c596bdb4e65
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.089235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.089235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.089235   13136 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6fd4ae1e-3802-4893-86a4-85da162d717d","resourceVersion":"329","creationTimestamp":"2025-02-03T12:04:59Z"}}]}
	I0203 12:28:44.089783   13136 default_sa.go:45] found service account: "default"
	I0203 12:28:44.089783   13136 default_sa.go:55] duration metric: took 6.1846ms for default service account to be created ...
	I0203 12:28:44.089783   13136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 12:28:44.089919   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:44.089967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.089967   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.089967   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.093810   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:44.093810   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.093810   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Audit-Id: cae25a14-0418-4eb6-b37d-108d48bbba9e
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.093810   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.094921   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90284 chars]
	I0203 12:28:44.098764   13136 system_pods.go:86] 12 kube-system pods found
	I0203 12:28:44.098764   13136 system_pods.go:89] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:28:44.098764   13136 system_pods.go:126] duration metric: took 8.9813ms to wait for k8s-apps to be running ...
	I0203 12:28:44.099360   13136 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:28:44.106204   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:28:44.134085   13136 system_svc.go:56] duration metric: took 33.9378ms WaitForService to wait for kubelet
	I0203 12:28:44.134085   13136 kubeadm.go:582] duration metric: took 1m13.9269875s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:28:44.134085   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:28:44.134200   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:28:44.134305   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.134305   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.134305   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.137558   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:44.137771   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.137771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Audit-Id: f9be6b6b-a640-40be-9a48-ed837033e5aa
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.137771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.138232   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16254 chars]
	I0203 12:28:44.139139   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:105] duration metric: took 5.1531ms to run NodePressure ...
	I0203 12:28:44.139238   13136 start.go:241] waiting for startup goroutines ...
	I0203 12:28:44.139238   13136 start.go:246] waiting for cluster config update ...
	I0203 12:28:44.139341   13136 start.go:255] writing updated cluster config ...
	I0203 12:28:44.143571   13136 out.go:201] 
	I0203 12:28:44.145942   13136 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:28:44.160345   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:28:44.161389   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:28:44.166560   13136 out.go:177] * Starting "multinode-749300-m02" worker node in "multinode-749300" cluster
	I0203 12:28:44.168685   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:28:44.169315   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:28:44.169629   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:28:44.169829   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:28:44.169994   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:28:44.171870   13136 start.go:360] acquireMachinesLock for multinode-749300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:28:44.172014   13136 start.go:364] duration metric: took 144µs to acquireMachinesLock for "multinode-749300-m02"
	I0203 12:28:44.172172   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:28:44.172172   13136 fix.go:54] fixHost starting: m02
	I0203 12:28:44.172637   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:46.223651   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:28:46.223651   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:46.223651   13136 fix.go:112] recreateIfNeeded on multinode-749300-m02: state=Stopped err=<nil>
	W0203 12:28:46.223761   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:28:46.227745   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300-m02" ...
	I0203 12:28:46.229652   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300-m02
	I0203 12:28:49.145103   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:49.145103   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:49.145103   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:28:49.145183   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:28:53.527445   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:53.527445   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:54.527815   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:56.555048   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:28:56.555901   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:56.555901   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:28:58.853976   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:58.854723   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:59.855621   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:04.172300   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:29:04.172300   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:05.172762   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:07.213673   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:07.214760   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:07.214965   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:09.538187   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:29:09.538187   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:10.539340   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:12.563875   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:12.564439   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:12.564439   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:14.994260   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:14.995107   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:14.997356   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:19.339369   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:19.339599   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:19.339884   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:29:19.342720   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:29:19.342866   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:23.680807   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:23.681720   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:23.685734   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:23.685734   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:23.686260   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:29:23.829909   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:29:23.829994   13136 buildroot.go:166] provisioning hostname "multinode-749300-m02"
	I0203 12:29:23.830070   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:25.785508   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:25.785508   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:25.785879   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:28.126150   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:28.126150   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:28.132396   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:28.133188   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:28.133188   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300-m02 && echo "multinode-749300-m02" | sudo tee /etc/hostname
	I0203 12:29:28.297595   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300-m02
	
	I0203 12:29:28.297595   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:32.645244   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:32.645244   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:32.649090   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:32.649552   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:32.649552   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:29:32.803164   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:29:32.803164   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:29:32.803230   13136 buildroot.go:174] setting up certificates
	I0203 12:29:32.803267   13136 provision.go:84] configureAuth start
	I0203 12:29:32.803267   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:34.754644   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:34.754644   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:34.754723   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:37.106839   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:37.106909   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:37.106983   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:39.083419   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:39.083419   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:39.084477   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:41.455774   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:41.455774   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:41.455774   13136 provision.go:143] copyHostCerts
	I0203 12:29:41.456252   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:29:41.456675   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:29:41.456675   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:29:41.457126   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:29:41.458079   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:29:41.458239   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:29:41.458312   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:29:41.458649   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:29:41.459471   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:29:41.459636   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:29:41.459721   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:29:41.460016   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:29:41.461120   13136 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300-m02 san=[127.0.0.1 172.25.12.83 localhost minikube multinode-749300-m02]
	I0203 12:29:41.668515   13136 provision.go:177] copyRemoteCerts
	I0203 12:29:41.676417   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:29:41.676511   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:46.016285   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:46.016960   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:46.016960   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:29:46.133381   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4567128s)
	I0203 12:29:46.133443   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:29:46.133869   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:29:46.182115   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:29:46.182538   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0203 12:29:46.227001   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:29:46.227001   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 12:29:46.271989   13136 provision.go:87] duration metric: took 13.4685705s to configureAuth
	I0203 12:29:46.271989   13136 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:29:46.273002   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:29:46.273002   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:48.307360   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:48.307437   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:48.307512   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:50.679048   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:50.679048   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:50.682452   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:50.683154   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:50.683154   13136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:29:50.822839   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:29:50.822839   13136 buildroot.go:70] root file system type: tmpfs
	I0203 12:29:50.822839   13136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:29:50.822839   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:52.815521   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:52.815521   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:52.816022   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:55.160298   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:55.160697   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:55.165242   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:55.165965   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:55.165965   13136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.244"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:29:55.339851   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.244
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:29:55.339983   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:57.311598   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:57.311849   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:57.312103   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:59.665042   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:59.665042   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:59.669254   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:59.669977   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:59.669977   13136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:30:02.001431   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:30:02.001431   13136 machine.go:96] duration metric: took 42.6582333s to provisionDockerMachine
	I0203 12:30:02.001431   13136 start.go:293] postStartSetup for "multinode-749300-m02" (driver="hyperv")
	I0203 12:30:02.001431   13136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:30:02.010261   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:30:02.011020   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:04.023138   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:04.023870   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:04.023870   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:06.435420   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:06.435420   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:06.435420   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:06.553597   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5431968s)
	I0203 12:30:06.560819   13136 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:30:06.567642   13136 command_runner.go:130] > NAME=Buildroot
	I0203 12:30:06.567642   13136 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:30:06.567642   13136 command_runner.go:130] > ID=buildroot
	I0203 12:30:06.567642   13136 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:30:06.567642   13136 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:30:06.567642   13136 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:30:06.567642   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:30:06.567642   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:30:06.569339   13136 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:30:06.569339   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:30:06.577353   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:30:06.595518   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:30:06.639303   13136 start.go:296] duration metric: took 4.6378206s for postStartSetup
	I0203 12:30:06.639391   13136 fix.go:56] duration metric: took 1m22.4662947s for fixHost
	I0203 12:30:06.639477   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:08.602935   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:08.603367   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:08.603470   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:10.924979   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:10.924979   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:10.929021   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:30:10.929083   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:30:10.929083   13136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:30:11.068118   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738585811.083018155
	
	I0203 12:30:11.068200   13136 fix.go:216] guest clock: 1738585811.083018155
	I0203 12:30:11.068200   13136 fix.go:229] Guest: 2025-02-03 12:30:11.083018155 +0000 UTC Remote: 2025-02-03 12:30:06.639391 +0000 UTC m=+283.133881701 (delta=4.443627155s)
	I0203 12:30:11.068274   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:13.010546   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:13.010546   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:13.011033   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:15.371836   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:15.371836   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:15.375529   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:30:15.376274   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:30:15.376274   13136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738585811
	I0203 12:30:15.522276   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:30:11 UTC 2025
	
	I0203 12:30:15.522381   13136 fix.go:236] clock set: Mon Feb  3 12:30:11 UTC 2025
	 (err=<nil>)
	I0203 12:30:15.522381   13136 start.go:83] releasing machines lock for "multinode-749300-m02", held for 1m31.349344s
	I0203 12:30:15.522566   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:17.465298   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:17.465922   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:17.465922   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:19.830610   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:19.830610   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:19.833741   13136 out.go:177] * Found network options:
	I0203 12:30:19.837143   13136 out.go:177]   - NO_PROXY=172.25.12.244
	W0203 12:30:19.839410   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:30:19.842013   13136 out.go:177]   - NO_PROXY=172.25.12.244
	W0203 12:30:19.843510   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 12:30:19.844509   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:30:19.846634   13136 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:30:19.846634   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:19.853415   13136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:30:19.853415   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:21.870647   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:21.870647   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:21.870827   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:21.888685   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:21.888685   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:21.889685   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:24.276259   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:24.276259   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:24.277466   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:24.299754   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:24.299754   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:24.299754   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:24.376017   13136 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0203 12:30:24.376257   13136 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5227912s)
	W0203 12:30:24.376338   13136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:30:24.383423   13136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:30:24.389052   13136 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:30:24.389052   13136 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.542367s)
	W0203 12:30:24.389505   13136 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:30:24.418504   13136 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:30:24.418504   13136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:30:24.418504   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:30:24.418504   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:30:24.451525   13136 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:30:24.459581   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:30:24.488217   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 12:30:24.508414   13136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:30:24.516260   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 12:30:24.544176   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:30:24.571678   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:30:24.598339   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0203 12:30:24.610327   13136 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:30:24.611150   13136 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:30:24.629388   13136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:30:24.660168   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:30:24.689929   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:30:24.718942   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0203 12:30:24.747741   13136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:30:24.765714   13136 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:30:24.766184   13136 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:30:24.774355   13136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:30:24.810504   13136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 12:30:24.834121   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:25.010868   13136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:30:25.047072   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:30:25.055414   13136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:30:25.076455   13136 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:30:25.076455   13136 command_runner.go:130] > [Unit]
	I0203 12:30:25.076455   13136 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:30:25.076455   13136 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:30:25.076455   13136 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:30:25.077200   13136 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:30:25.077200   13136 command_runner.go:130] > StartLimitBurst=3
	I0203 12:30:25.077360   13136 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:30:25.077360   13136 command_runner.go:130] > [Service]
	I0203 12:30:25.077360   13136 command_runner.go:130] > Type=notify
	I0203 12:30:25.077360   13136 command_runner.go:130] > Restart=on-failure
	I0203 12:30:25.077360   13136 command_runner.go:130] > Environment=NO_PROXY=172.25.12.244
	I0203 12:30:25.077360   13136 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:30:25.077360   13136 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:30:25.077360   13136 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:30:25.077360   13136 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:30:25.077360   13136 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:30:25.077360   13136 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:30:25.077360   13136 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecStart=
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:30:25.077360   13136 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitCORE=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:30:25.077360   13136 command_runner.go:130] > TasksMax=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:30:25.077360   13136 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:30:25.077360   13136 command_runner.go:130] > Delegate=yes
	I0203 12:30:25.077360   13136 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:30:25.077360   13136 command_runner.go:130] > KillMode=process
	I0203 12:30:25.077360   13136 command_runner.go:130] > [Install]
	I0203 12:30:25.077360   13136 command_runner.go:130] > WantedBy=multi-user.target
	I0203 12:30:25.086445   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:30:25.116174   13136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:30:25.157577   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:30:25.192208   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:30:25.223508   13136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:30:25.283980   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:30:25.308451   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:30:25.344031   13136 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0203 12:30:25.352532   13136 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:30:25.358930   13136 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:30:25.367553   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:30:25.384841   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:30:25.426734   13136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:30:25.622634   13136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:30:25.795117   13136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:30:25.795117   13136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:30:25.839323   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:26.017146   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:30:28.673763   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6565878s)
	I0203 12:30:28.680764   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:30:28.712976   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:30:28.743289   13136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:30:28.928229   13136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:30:29.117571   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:29.304467   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:30:29.341881   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:30:29.371943   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:29.548852   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:30:29.651781   13136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:30:29.659524   13136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:30:29.667791   13136 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:30:29.667791   13136 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:30:29.667791   13136 command_runner.go:130] > Device: 0,22	Inode: 859         Links: 1
	I0203 12:30:29.667791   13136 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:30:29.667791   13136 command_runner.go:130] > Access: 2025-02-03 12:30:29.589054259 +0000
	I0203 12:30:29.667791   13136 command_runner.go:130] > Modify: 2025-02-03 12:30:29.589054259 +0000
	I0203 12:30:29.667919   13136 command_runner.go:130] > Change: 2025-02-03 12:30:29.593054266 +0000
	I0203 12:30:29.667919   13136 command_runner.go:130] >  Birth: -
	I0203 12:30:29.668024   13136 start.go:563] Will wait 60s for crictl version
	I0203 12:30:29.675669   13136 ssh_runner.go:195] Run: which crictl
	I0203 12:30:29.681717   13136 command_runner.go:130] > /usr/bin/crictl
	I0203 12:30:29.689217   13136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:30:29.739657   13136 command_runner.go:130] > Version:  0.1.0
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeName:  docker
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:30:29.739657   13136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:30:29.746863   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:30:29.784543   13136 command_runner.go:130] > 27.4.0
	I0203 12:30:29.791537   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:30:29.824172   13136 command_runner.go:130] > 27.4.0
	I0203 12:30:29.828197   13136 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:30:29.830237   13136 out.go:177]   - env NO_PROXY=172.25.12.244
	I0203 12:30:29.833206   13136 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:30:29.840206   13136 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:30:29.840206   13136 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:30:29.848210   13136 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:30:29.855196   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:30:29.877543   13136 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:30:29.877707   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:29.878794   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:31.834191   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:31.834191   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:31.834325   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:31.834843   13136 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.12.83
	I0203 12:30:31.834843   13136 certs.go:194] generating shared ca certs ...
	I0203 12:30:31.834843   13136 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:30:31.835379   13136 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:30:31.835668   13136 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:30:31.835896   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:30:31.836482   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:30:31.836853   13136 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:30:31.836930   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:30:31.837104   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:30:31.837357   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:30:31.837556   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:30:31.837862   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:30:31.838018   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:31.838184   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:30:31.838271   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:30:31.838469   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:30:31.884367   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:30:31.927632   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:30:31.971236   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:30:32.015509   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:30:32.059445   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:30:32.103160   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:30:32.156168   13136 ssh_runner.go:195] Run: openssl version
	I0203 12:30:32.164997   13136 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:30:32.173999   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:30:32.201808   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.209041   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.209041   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.217562   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.228961   13136 command_runner.go:130] > b5213941
	I0203 12:30:32.238593   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:30:32.264594   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:30:32.292026   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.298814   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.299288   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.307057   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.315958   13136 command_runner.go:130] > 51391683
	I0203 12:30:32.323055   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 12:30:32.351057   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:30:32.380654   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.387930   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.388042   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.395870   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.404292   13136 command_runner.go:130] > 3ec20f2e
	I0203 12:30:32.412732   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
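The lines above show minikube installing the extra CA certificates inside the guest and wiring up the OpenSSL hash symlinks under /etc/ssl/certs (hash via `openssl x509 -hash -noout`, then `test -L || ln -fs`). A minimal Go sketch of that hash-and-link step, shelling out to openssl the same way the log does; the paths and the local exec call are illustrative, not minikube's actual ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the commands in the log: compute the OpenSSL subject
// hash of a PEM certificate and point /etc/ssl/certs/<hash>.0 at it.
// In the real flow these commands run over SSH inside the minikube VM.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pemPath> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```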
	I0203 12:30:32.440159   13136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:30:32.446772   13136 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:30:32.446772   13136 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:30:32.446772   13136 kubeadm.go:934] updating node {m02 172.25.12.83 8443 v1.32.1 docker false true} ...
	I0203 12:30:32.446772   13136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:30:32.454162   13136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:30:32.473643   13136 command_runner.go:130] > kubeadm
	I0203 12:30:32.473695   13136 command_runner.go:130] > kubectl
	I0203 12:30:32.473695   13136 command_runner.go:130] > kubelet
	I0203 12:30:32.473729   13136 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 12:30:32.481580   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0203 12:30:32.501567   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0203 12:30:32.531463   13136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
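The [Unit]/[Service] block a few lines up is the kubelet drop-in that minikube renders for the joining node and then copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small sketch of templating that file with text/template; the template text, struct, and field names here are illustrative rather than minikube's actual source, but the rendered ExecStart line matches the one in the log:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative inputs for the kubelet drop-in shown in the log; the real values
// come from the node being joined (multinode-749300-m02 / 172.25.12.83).
type kubeletOpts struct {
	BinDir   string
	Hostname string
	NodeIP   string
}

// Hypothetical template reproducing the drop-in from the log output.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.32.1",
		Hostname: "multinode-749300-m02",
		NodeIP:   "172.25.12.83",
	})
}
```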
	I0203 12:30:32.570564   13136 ssh_runner.go:195] Run: grep 172.25.12.244	control-plane.minikube.internal$ /etc/hosts
	I0203 12:30:32.577410   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.12.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
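The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the primary node's IP: drop any existing entry, append a fresh one, then copy the file back into place. A rough Go equivalent of that filter-and-append step (file path and helper name are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureControlPlaneEntry reproduces the shell pipeline from the log: remove any
// line ending in "\tcontrol-plane.minikube.internal", then append a fresh entry.
func ensureControlPlaneEntry(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\tcontrol-plane.minikube.internal", ip))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureControlPlaneEntry("/etc/hosts", "172.25.12.244"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```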
	I0203 12:30:32.606757   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:32.793095   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:30:32.822245   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:32.822983   13136 start.go:317] joinCluster: &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-
provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMe
trics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:30:32.823146   13136 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:32.823192   13136 host.go:66] Checking if "multinode-749300-m02" exists ...
	I0203 12:30:32.823667   13136 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:30:32.824088   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:32.824567   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:34.845527   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:34.845527   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:34.846527   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:34.846677   13136 api_server.go:166] Checking apiserver status ...
	I0203 12:30:34.855152   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:30:34.855214   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:36.862077   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:36.862077   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:36.862629   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:39.202108   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:30:39.202895   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:39.202895   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:30:39.318448   13136 command_runner.go:130] > 1987
	I0203 12:30:39.318448   13136 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4631838s)
	I0203 12:30:39.326382   13136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup
	W0203 12:30:39.346014   13136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 12:30:39.354732   13136 ssh_runner.go:195] Run: ls
	I0203 12:30:39.362437   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:30:39.373178   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
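Before touching the stale worker, minikube checks the control-plane apiserver's /healthz endpoint (the "Checking apiserver healthz at https://172.25.12.244:8443/healthz ... returned 200" lines above). A minimal sketch of such a probe; InsecureSkipVerify is used here only to keep the sketch short, whereas minikube authenticates with the cluster CA and client certificates, which is generally needed for the apiserver to answer:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver health endpoint the way the log does.
// Skipping TLS verification and omitting client certs is a simplification.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return fmt.Sprintf("%d %s", resp.StatusCode, string(body)), nil
}

func main() {
	status, err := checkHealthz("https://172.25.12.244:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	fmt.Println(status) // the log shows "200 ok" at this point
}
```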
	I0203 12:30:39.380420   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-749300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0203 12:30:39.519234   13136 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-dc9wq, kube-system/kube-proxy-ggnq7
	I0203 12:30:42.555132   13136 command_runner.go:130] > node/multinode-749300-m02 cordoned
	I0203 12:30:42.555272   13136 command_runner.go:130] > pod "busybox-58667487b6-c66bf" has DeletionTimestamp older than 1 seconds, skipping
	I0203 12:30:42.555272   13136 command_runner.go:130] > node/multinode-749300-m02 drained
	I0203 12:30:42.555272   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-749300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1748159s)
	I0203 12:30:42.555272   13136 node.go:128] successfully drained node "multinode-749300-m02"
	I0203 12:30:42.555399   13136 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0203 12:30:42.555491   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:46.967766   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:46.967821   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:46.968184   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:47.402992   13136 command_runner.go:130] ! W0203 12:30:47.419505    1672 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0203 12:30:47.606618   13136 command_runner.go:130] ! W0203 12:30:47.623353    1672 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod fbb29dd3e5ebc489c42552b25f24ca2b8d6fb85e374593277c866a7c497f491e: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-58667487b6-c66bf_default" network: cni config uninitialized
	I0203 12:30:47.628787   13136 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:30:47.628845   13136 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0203 12:30:47.628892   13136 command_runner.go:130] > [reset] Stopping the kubelet service
	I0203 12:30:47.628892   13136 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0203 12:30:47.628929   13136 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0203 12:30:47.628971   13136 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0203 12:30:47.628997   13136 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0203 12:30:47.628997   13136 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0203 12:30:47.628997   13136 command_runner.go:130] > to reset your system's IPVS tables.
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0203 12:30:47.628997   13136 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0203 12:30:47.628997   13136 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.0734887s)
	I0203 12:30:47.628997   13136 node.go:155] successfully reset node "multinode-749300-m02"
	I0203 12:30:47.629910   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:30:47.630988   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:30:47.632226   13136 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 12:30:47.632226   13136 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0203 12:30:47.632226   13136 round_trippers.go:463] DELETE https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:47.632226   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:47.632226   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:47.632226   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:47.632226   13136 round_trippers.go:473]     Content-Type: application/json
	I0203 12:30:47.651082   13136 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0203 12:30:47.651082   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:47.651082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:47.651082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Content-Length: 171
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:47 GMT
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Audit-Id: 2ddd1a96-a225-4a38-aaa1-a67411022e02
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:47.651082   13136 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-749300-m02","kind":"nodes","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64"}}
	I0203 12:30:47.651082   13136 node.go:180] successfully deleted node "multinode-749300-m02"
	I0203 12:30:47.652083   13136 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
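The sequence above is minikube's pre-join cleanup for the stale m02 node: `kubectl drain`, then `kubeadm reset` on the node, then a DELETE of the Node object against the apiserver (the round_trippers lines). A sketch of that last step using client-go instead of minikube's internal client, assuming the kubeconfig path seen in the log; the drain and reset are simply the shell commands shown above:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteNode issues the same DELETE /api/v1/nodes/<name> call seen in the log.
func deleteNode(kubeconfig, name string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return cs.CoreV1().Nodes().Delete(context.Background(), name, metav1.DeleteOptions{})
}

func main() {
	// Kubeconfig path taken from the log; adjust for your own environment.
	err := deleteNode(`C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`, "multinode-749300-m02")
	if err != nil {
		fmt.Println("delete node:", err)
	}
}
```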
	I0203 12:30:47.652083   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 12:30:47.652083   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:52.005941   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:30:52.005941   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:52.005941   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:30:52.436057   13136 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
	I0203 12:30:52.436907   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7847697s)
	I0203 12:30:52.436975   13136 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:52.436975   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02"
	I0203 12:30:52.609431   13136 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 12:30:53.972004   13136 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:30:53.972903   13136 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0203 12:30:53.972903   13136 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:30:53.972997   13136 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 12:30:53.973081   13136 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 503.565131ms
	I0203 12:30:53.973140   13136 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0203 12:30:53.973140   13136 command_runner.go:130] > This node has joined the cluster:
	I0203 12:30:53.973226   13136 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0203 12:30:53.973226   13136 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0203 12:30:53.973226   13136 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0203 12:30:53.973226   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02": (1.5362337s)
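Rejoining happens in the two shell steps visible above: `kubeadm token create --print-join-command --ttl=0` on the control plane, then that join command on the worker with the extra flags minikube appends (--ignore-preflight-errors=all, the cri-dockerd socket, --node-name). A sketch that glues those together with os/exec, assuming both commands run locally rather than over minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rejoinWorker mirrors the two commands in the log: ask the control plane for a
// fresh join command, then run it with the extra flags minikube adds.
// In the real flow each command executes over SSH inside the respective VM.
func rejoinWorker(nodeName string) error {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		return fmt.Errorf("token create: %w", err)
	}
	join := strings.Fields(strings.TrimSpace(string(out))) // "kubeadm join <endpoint> --token ..."
	join = append(join,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name="+nodeName,
	)
	cmdOut, err := exec.Command(join[0], join[1:]...).CombinedOutput()
	fmt.Print(string(cmdOut))
	return err
}

func main() {
	if err := rejoinWorker("multinode-749300-m02"); err != nil {
		fmt.Println("rejoin failed:", err)
	}
}
```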
	I0203 12:30:53.973338   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 12:30:54.179128   13136 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0203 12:30:54.368281   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-749300-m02 minikube.k8s.io/updated_at=2025_02_03T12_30_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=multinode-749300 minikube.k8s.io/primary=false
	I0203 12:30:54.503657   13136 command_runner.go:130] > node/multinode-749300-m02 labeled
	I0203 12:30:54.503657   13136 start.go:319] duration metric: took 21.6804317s to joinCluster
	I0203 12:30:54.503657   13136 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:54.504507   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:54.506479   13136 out.go:177] * Verifying Kubernetes components...
	I0203 12:30:54.517803   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:54.703763   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:30:54.736542   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:30:54.737098   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:30:54.737638   13136 node_ready.go:35] waiting up to 6m0s for node "multinode-749300-m02" to be "Ready" ...
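The repeated GET /api/v1/nodes/multinode-749300-m02 requests that follow are this wait loop: poll the Node object until its Ready condition is True, up to 6m0s, roughly every 500ms. A minimal sketch of that loop with client-go; the helper name and kubeconfig path are illustrative, not minikube's node_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node until its Ready condition is True, which is what
// the stream of GET requests in the log is doing.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "multinode-749300-m02", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```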
	I0203 12:30:54.738053   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:54.738053   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:54.738053   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:54.738053   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:54.741809   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:54.741809   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:54.741809   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:54.741809   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:54 GMT
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Audit-Id: 0e27248a-ca01-4565-b1d7-b55afa090727
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:54.741809   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:55.238679   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:55.238679   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:55.238679   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:55.238679   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:55.247915   13136 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 12:30:55.247915   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Audit-Id: 542b25a7-d059-489d-b023-1778b403c416
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:55.247915   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:55.247915   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:55 GMT
	I0203 12:30:55.247915   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:55.738020   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:55.738537   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:55.738537   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:55.738537   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:55.744601   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:30:55.744601   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:55.744601   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:55.744601   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:55 GMT
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Audit-Id: aeab1258-d0d0-4d3a-82d4-d64a6fda3876
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:55.744601   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.238110   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:56.238110   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:56.238110   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:56.238110   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:56.241922   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:56.241922   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:56 GMT
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Audit-Id: cd6a60e2-bd4c-46ed-b8e2-a088357667b5
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:56.241922   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:56.241922   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:56.242459   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.737953   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:56.738347   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:56.738347   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:56.738347   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:56.741047   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:30:56.742046   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Audit-Id: c4cb4f5c-2da5-4d53-89ab-8336bddbabb8
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:56.742046   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:56.742046   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:56 GMT
	I0203 12:30:56.742046   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.742046   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:30:57.238071   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:57.238071   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:57.238071   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:57.238071   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:57.242848   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:57.242944   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:57.242944   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:57.242944   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:57 GMT
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Audit-Id: b847ac4c-ca87-4e4e-906c-af762bb9a7b2
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:57.243140   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:57.738158   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:57.738158   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:57.738158   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:57.738158   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:57.742170   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:57.742241   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Audit-Id: 7f33799f-30c7-4d16-b404-1cb2056dd0b2
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:57.742241   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:57.742241   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:57 GMT
	I0203 12:30:57.742533   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:58.238985   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:58.238985   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:58.238985   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:58.238985   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:58.243167   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:58.243167   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Audit-Id: 22f1ce72-4616-426e-a5af-c76ddc116d03
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:58.243167   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:58.243167   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:58 GMT
	I0203 12:30:58.243488   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:58.738536   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:58.738536   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:58.738536   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:58.738536   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:58.742800   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:58.742800   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:58.742908   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:58.742908   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:58 GMT
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Audit-Id: 7ccf0d0a-c805-47c1-9249-d6fba4c2294e
	I0203 12:30:58.743049   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:30:58.743446   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:30:59.237850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:59.237850   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:59.237850   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:59.237850   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:59.242646   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:59.242646   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Audit-Id: 0f35e76d-276a-45e1-8b19-21667d4518a4
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:59.242728   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:59.242728   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:59.242728   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:59 GMT
	I0203 12:30:59.242886   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:30:59.738416   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:59.738416   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:59.738416   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:59.738416   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:59.742216   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:59.742216   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:59.742216   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:59.742216   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:59 GMT
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Audit-Id: 328dc19a-de53-4f49-8f03-a7b89b9b7994
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:59.742423   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:00.239542   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:00.239542   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:00.239542   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:00.239542   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:00.243306   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:00.243306   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:00.243306   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:00.243306   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:00 GMT
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Audit-Id: 14d46575-ebb9-4f53-a377-369436f2efed
	I0203 12:31:00.244318   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:00.737766   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:00.737766   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:00.737766   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:00.737766   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:00.741935   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:00.741935   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Audit-Id: bba2832a-5795-46bb-b517-1b48a45f26ea
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:00.742010   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:00.742010   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:00 GMT
	I0203 12:31:00.742191   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:01.238469   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:01.238469   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:01.238469   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:01.238469   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:01.242646   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:01.242775   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:01.242775   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:01.242775   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:01 GMT
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Audit-Id: aeeb4e6d-debf-4189-aec9-586a8ee73a54
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:01.242893   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:01.243379   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:01.738968   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:01.738968   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:01.738968   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:01.738968   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:01.743020   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:01.743020   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Audit-Id: c4a755bc-a420-438c-90ba-a828088500ea
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:01.743020   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:01.743020   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:01 GMT
	I0203 12:31:01.743282   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:02.238051   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:02.238051   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:02.238051   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:02.238051   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:02.242323   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:02.242420   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:02 GMT
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Audit-Id: 625f6715-d8fe-4660-826d-156e44619097
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:02.242420   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:02.242420   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:02.243060   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:02.739507   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:02.739578   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:02.739578   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:02.739578   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:02.743073   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:02.743073   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:02.743073   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:02.743073   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:02 GMT
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Audit-Id: a20a5a74-bc6c-40a2-877c-3b10174146ad
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:02.743629   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.238575   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:03.238575   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:03.238575   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:03.238575   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:03.242978   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:03.242978   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Audit-Id: a2ae7118-3676-43c4-a7d7-31f7e1042bea
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:03.243108   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:03.243108   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:03 GMT
	I0203 12:31:03.243184   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.738377   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:03.738377   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:03.738377   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:03.738377   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:03.742819   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:03.742819   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:03.742819   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:03 GMT
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Audit-Id: 1a6ddf04-1e71-4498-92ea-d1fdf3d4ab86
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:03.742936   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:03.743073   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.743522   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:04.238303   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:04.238303   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:04.238303   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:04.238303   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:04.242898   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:04.242898   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Audit-Id: ea6c1ddc-fc65-4688-9b3b-7a0dc305b988
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:04.243009   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:04.243009   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:04 GMT
	I0203 12:31:04.243113   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:04.738517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:04.738517   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:04.738517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:04.738517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:04.742786   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:04.742786   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Audit-Id: 7b1c16a2-00aa-42e7-842a-a46b86b2b831
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:04.742786   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:04.742786   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:04 GMT
	I0203 12:31:04.743238   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.238753   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:05.238753   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:05.238753   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:05.238753   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:05.242401   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:05.242401   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:05.242401   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:05.242483   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:05.242483   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:05 GMT
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Audit-Id: a8c8aa18-4b21-4787-8d24-16b8423c93a2
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:05.242940   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.738285   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:05.738285   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:05.738285   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:05.738285   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:05.743325   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:05.743390   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:05.743390   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:05.743390   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:05.743390   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:05.743390   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:05.743448   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:05 GMT
	I0203 12:31:05.743448   13136 round_trippers.go:580]     Audit-Id: a24c6c6f-149d-4a5a-93aa-edd6fa1fc3c5
	I0203 12:31:05.744030   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.744341   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:06.238991   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:06.239063   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:06.239063   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:06.239063   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:06.242569   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:06.242569   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Audit-Id: 77039a85-63e1-48ad-9427-b515055869b2
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:06.242569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:06.242569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:06 GMT
	I0203 12:31:06.243096   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:06.738850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:06.739590   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:06.739590   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:06.739590   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:06.745861   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:06.745861   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:06.745861   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:06.745861   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:06 GMT
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Audit-Id: 341bb909-e1f9-4ef4-a7f0-febfe84200c8
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:06.746612   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:07.238888   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:07.238888   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:07.238888   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:07.238888   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:07.242127   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:07.242563   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:07.242563   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:07.242563   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:07.242626   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:07 GMT
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Audit-Id: 81229084-47d2-4190-a5b0-1f92ac79a21d
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:07.242947   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:07.739132   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:07.739309   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:07.739309   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:07.739309   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:07.742965   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:07.743031   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Audit-Id: b8633751-d755-4a9d-9291-f992357d1099
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:07.743031   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:07.743031   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:07.743105   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:07 GMT
	I0203 12:31:07.743245   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:08.238644   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:08.238644   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:08.238644   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:08.238644   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:08.243447   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:08.243536   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:08.243536   13136 round_trippers.go:580]     Audit-Id: dfea063a-de99-4654-9b92-abff80de0f2a
	I0203 12:31:08.243536   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:08.243571   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:08.243571   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:08.243571   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:08.243571   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:08 GMT
	I0203 12:31:08.243795   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:08.243795   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:08.738876   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:08.738876   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:08.738876   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:08.738876   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:08.743015   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:08.743015   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Audit-Id: 1df72dec-4aa8-4b03-bc00-00306ed37560
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:08.743015   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:08.743015   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:08 GMT
	I0203 12:31:08.743015   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:09.237985   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:09.237985   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:09.237985   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:09.237985   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:09.242413   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:09.242492   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:09.242492   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:09 GMT
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Audit-Id: da6cd810-f2ea-4514-93db-5e9eca14a8b2
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:09.242492   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:09.242732   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:09.738187   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:09.738187   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:09.738187   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:09.738187   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:09.742607   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:09.742607   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:09.742607   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:09.742607   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:09 GMT
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Audit-Id: 05661713-4387-4f75-b655-51606d72ace1
	I0203 12:31:09.742833   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:10.238174   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:10.238174   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.238174   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.238174   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.243384   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.243384   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.243384   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.243384   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Audit-Id: 15498d28-4edc-4ca7-a4f5-5d46a0b5623d
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.243384   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2158","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0203 12:31:10.244043   13136 node_ready.go:49] node "multinode-749300-m02" has status "Ready":"True"
	I0203 12:31:10.244125   13136 node_ready.go:38] duration metric: took 15.506313s for node "multinode-749300-m02" to be "Ready" ...
	I0203 12:31:10.244125   13136 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:31:10.244290   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:31:10.244290   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.244290   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.244290   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.250326   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:10.250326   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Audit-Id: 1bf4f542-2901-4c52-944a-99dc63b4edc8
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.250326   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.250326   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.251694   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2160"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89844 chars]
	I0203 12:31:10.255659   13136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.255659   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:31:10.255659   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.255659   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.255659   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.259217   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:10.259217   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Audit-Id: dbcecdc4-c942-46ac-b731-cb5635ac0341
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.259217   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.259217   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.259217   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0203 12:31:10.260186   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.260186   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.260186   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.260186   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.263270   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.263312   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.263312   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.263312   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Audit-Id: e79e89b8-9466-4eac-bb0d-c463e202cdf0
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.263443   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.263874   13136 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.263940   13136 pod_ready.go:82] duration metric: took 8.2153ms for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.263940   13136 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.263999   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:31:10.263999   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.263999   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.263999   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.266747   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.266747   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.266747   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Audit-Id: b959a1c7-63ec-4dcf-98fb-cc495338b276
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.266747   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.266747   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1876","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0203 12:31:10.267425   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.267479   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.267479   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.267479   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.269709   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.269709   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.269709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.269709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Audit-Id: a19b8d33-3e44-495c-8f95-8e561bb5f764
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.269709   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.269709   13136 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.269709   13136 pod_ready.go:82] duration metric: took 5.7694ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.269709   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.270710   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:31:10.270710   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.270710   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.270710   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.272538   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:31:10.272538   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.272538   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Audit-Id: 557ab5cc-2b1f-4331-b1cc-3281c6a147ac
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.272538   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.273552   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1856","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0203 12:31:10.274154   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.274154   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.274212   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.274212   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.275926   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:31:10.275926   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.275926   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.275926   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.275926   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.275926   13136 round_trippers.go:580]     Audit-Id: b5789de2-b973-4bbd-b299-0a56a35dfbaf
	I0203 12:31:10.276721   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.276721   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.276995   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.277395   13136 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.277395   13136 pod_ready.go:82] duration metric: took 6.685ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.277395   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.277579   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:31:10.277613   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.277652   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.277652   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.279895   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.279895   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Audit-Id: 0bac0957-4903-425c-914b-0c22e8499ab8
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.279895   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.279895   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.279895   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1874","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0203 12:31:10.279895   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.279895   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.279895   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.279895   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.286150   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:10.286150   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.286235   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.286235   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.286235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.286235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.286269   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.286269   13136 round_trippers.go:580]     Audit-Id: 073ccdab-2566-412f-915e-b462c49a331a
	I0203 12:31:10.286269   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.286269   13136 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.286269   13136 pod_ready.go:82] duration metric: took 8.8738ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.286269   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.439730   13136 request.go:632] Waited for 153.4595ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:31:10.439730   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:31:10.439730   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.439730   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.439730   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.445099   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.445165   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.445165   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.445165   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.445165   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.445224   13136 round_trippers.go:580]     Audit-Id: 0e3d0907-e9a1-40aa-97f4-e616430abb2f
	I0203 12:31:10.445240   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.445240   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.445920   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:31:10.638447   13136 request.go:632] Waited for 191.7891ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.638640   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.638640   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.638640   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.638640   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.642490   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:10.642490   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Audit-Id: c6acb06d-a44d-49b2-a512-bfd16ce4c115
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.642490   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.642490   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.642490   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.643228   13136 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.643325   13136 pod_ready.go:82] duration metric: took 357.0518ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.643325   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.838331   13136 request.go:632] Waited for 194.8983ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:31:10.838331   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:31:10.838331   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.838331   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.838331   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.843483   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.843592   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Audit-Id: 6385e06e-a930-4a15-9a26-edfb13aa566d
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.843592   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.843592   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.843910   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"2129","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0203 12:31:11.038388   13136 request.go:632] Waited for 193.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:11.038388   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:11.038388   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.038388   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.038388   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.043185   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.043294   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Audit-Id: 10b97eb8-776c-4828-ba1e-2dd45725b8b6
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.043294   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.043294   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.043576   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2158","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0203 12:31:11.043947   13136 pod_ready.go:93] pod "kube-proxy-ggnq7" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:11.044052   13136 pod_ready.go:82] duration metric: took 400.7227ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.044052   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.238789   13136 request.go:632] Waited for 194.6409ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:31:11.238789   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:31:11.238789   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.238789   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.238789   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.243909   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:11.243909   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Audit-Id: d228ddc7-c79a-489a-b7d0-2d9f31c8686e
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.243980   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.243980   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.244409   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:31:11.438475   13136 request.go:632] Waited for 193.8544ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:31:11.438475   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:31:11.438475   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.438475   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.438475   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.443006   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.443006   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.443006   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.443006   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Audit-Id: 053409ed-3659-41e4-b123-5bac1e64643f
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.443006   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:31:11.443669   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:31:11.443669   13136 pod_ready.go:82] duration metric: took 399.6126ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:31:11.443669   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:31:11.443669   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.639375   13136 request.go:632] Waited for 195.7039ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:31:11.639375   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:31:11.639375   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.639375   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.639375   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.643697   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.643697   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.643758   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.643758   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Audit-Id: 9259a0e1-204c-41f7-b143-c7cb8df5ea00
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.644338   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1864","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5563 chars]
	I0203 12:31:11.838525   13136 request.go:632] Waited for 193.5487ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:11.838918   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:11.838918   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.838918   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.839050   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.843718   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.844695   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Audit-Id: 05cdf026-770b-47b7-a77e-b43c8521fdb6
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.844695   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.844695   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.845047   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:11.845142   13136 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:11.845142   13136 pod_ready.go:82] duration metric: took 401.4686ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.845142   13136 pod_ready.go:39] duration metric: took 1.6009997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
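The pod_ready loop above is the harness polling the API server for each system pod and accepting it once its Ready condition reports True (or skipping it when the hosting node is not Ready, as happens later for kube-proxy-w8wrd). For readers who want to reproduce that readiness check outside the harness, here is a minimal client-go sketch; the pod name, kubeconfig path and 6-minute deadline are taken from or modeled on the log for illustration only, and this is not minikube's own pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// mirroring the `"Ready":"True"` checks in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default location; adjust to whatever
	// KUBECONFIG the environment under test actually uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll a single control-plane pod until Ready or timeout; the pod name
	// is one of those checked in the log and is only an example.
	const ns, name = "kube-system", "etcd-multinode-749300"
	deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" lines
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}

Note that the "Waited ... due to client-side throttling" lines in the log come from client-go's default rate limiter, not from API-server priority and fairness; the sketch above would show the same behavior if it issued requests back to back.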
	I0203 12:31:11.845142   13136 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:31:11.855058   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:31:11.881584   13136 system_svc.go:56] duration metric: took 36.4416ms WaitForService to wait for kubelet
	I0203 12:31:11.881584   13136 kubeadm.go:582] duration metric: took 17.3777323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:31:11.881584   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:31:12.038527   13136 request.go:632] Waited for 156.9412ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes
	I0203 12:31:12.038527   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:31:12.038527   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:12.038527   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:12.038527   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:12.043440   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:12.043440   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:12.043440   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:12.043440   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:12 GMT
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Audit-Id: 94d3c108-b72a-41b4-aa10-94f8ebbb33cb
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:12.043544   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:12.043620   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2162"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15605 chars]
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:105] duration metric: took 162.9793ms to run NodePressure ...
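The NodePressure step above issues a single GET /api/v1/nodes and reads each node's ephemeral-storage and CPU capacity. A rough equivalent, reusing the clientset (and imports) from the previous sketch, might look like the following; it is a sketch of the idea, not the checker the harness actually runs.

// listNodePressure prints each node's CPU and ephemeral-storage capacity
// plus its memory/disk pressure conditions, roughly what the
// node_conditions check in the log is verifying.
func listNodePressure(client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
	return nil
}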
	I0203 12:31:12.044565   13136 start.go:241] waiting for startup goroutines ...
	I0203 12:31:12.045013   13136 start.go:255] writing updated cluster config ...
	I0203 12:31:12.048785   13136 out.go:201] 
	I0203 12:31:12.052350   13136 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:31:12.064211   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:31:12.065390   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:12.070323   13136 out.go:177] * Starting "multinode-749300-m03" worker node in "multinode-749300" cluster
	I0203 12:31:12.073187   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:31:12.073187   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:31:12.074245   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:31:12.074245   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:31:12.074245   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:12.080973   13136 start.go:360] acquireMachinesLock for multinode-749300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:31:12.081616   13136 start.go:364] duration metric: took 642.7µs to acquireMachinesLock for "multinode-749300-m03"
	I0203 12:31:12.081652   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:31:12.081806   13136 fix.go:54] fixHost starting: m03
	I0203 12:31:12.081964   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:14.027593   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:31:14.028145   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:14.028145   13136 fix.go:112] recreateIfNeeded on multinode-749300-m03: state=Stopped err=<nil>
	W0203 12:31:14.028145   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:31:14.031469   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300-m03" ...
	I0203 12:31:14.035182   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300-m03
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:16.943565   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:19.063291   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:19.063714   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:19.063714   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:21.386102   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:21.387981   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:22.388865   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:26.810538   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:26.811297   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:27.811882   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:29.825160   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:29.825745   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:29.825823   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:32.144905   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:32.144905   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:33.145802   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:37.501094   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:37.501094   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:38.501981   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:40.529276   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:40.529276   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:40.529716   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:42.937766   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:42.937822   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:42.939705   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:44.904301   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:44.905326   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:44.905524   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:47.295389   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:47.295734   13136 main.go:141] libmachine: [stderr =====>] : 
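The restart sequence above (Start-VM followed by "Waiting for host to start...") keeps re-running two PowerShell expressions, quoted verbatim in the log, until the VM reports Running and its first network adapter exposes an IP address. A stripped-down version of that retry loop, shelling out to PowerShell from Go, could look like the sketch below; waitForVMIP, its one-second poll interval, and the assumption of a Windows host with the Hyper-V module are illustrative, not the libmachine driver's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psQuery runs one PowerShell expression non-interactively and returns its
// trimmed stdout, mirroring the [executing ==>] lines in the log.
func psQuery(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForVMIP polls the VM state and the first adapter's first IP address
// until an address appears or the deadline passes. vmName is assumed to be
// an existing Hyper-V VM such as "multinode-749300-m03".
func waitForVMIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err == nil && state == "Running" {
			ip, err := psQuery(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
			if err == nil && ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
}

func main() {
	ip, err := waitForVMIP("multinode-749300-m03", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip)
}

In the log the first few IP queries come back empty (the adapter has not received a DHCP lease yet), which is exactly the case the retry loop exists to absorb.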
	I0203 12:31:47.295809   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:47.297945   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:31:47.297945   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:49.263502   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:49.263502   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:49.263587   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:51.646635   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:51.647229   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:51.651422   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:31:51.651997   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.188 22 <nil> <nil>}
	I0203 12:31:51.651997   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:31:51.777036   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:31:51.777036   13136 buildroot.go:166] provisioning hostname "multinode-749300-m03"
	I0203 12:31:51.777036   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:53.735936   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:53.735936   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:53.736004   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:56.074362   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:56.074362   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:56.081116   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:31:56.081776   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.188 22 <nil> <nil>}
	I0203 12:31:56.081776   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300-m03 && echo "multinode-749300-m03" | sudo tee /etc/hostname
	I0203 12:31:56.249106   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300-m03
	
	I0203 12:31:56.249178   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:32:00.570996   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:32:00.570996   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:32:00.574891   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:32:00.575416   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.188 22 <nil> <nil>}
	I0203 12:32:00.575416   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:32:00.737788   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
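Hostname provisioning above runs three SSH commands against the freshly booted node: plain hostname, the sudo hostname/tee pair, and the /etc/hosts fix-up script shown verbatim in the log. To replay one of those commands by hand, a small golang.org/x/crypto/ssh sketch would suffice; the address is the m03 IP from the log, while the docker user and key path are placeholders, not values confirmed by this report.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node and runs one command, roughly what the
// "About to run SSH command" steps in the log are doing.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath) // placeholder path to the machine's private key
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("172.25.1.188:22", "docker", "id_rsa", "hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}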
	I0203 12:32:00.737875   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:32:00.737950   13136 buildroot.go:174] setting up certificates
	I0203 12:32:00.737950   13136 provision.go:84] configureAuth start
	I0203 12:32:00.738015   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-749300" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-749300
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-749300: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-749300" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-749300	172.25.1.53
multinode-749300-m02	172.25.8.35
multinode-749300-m03	172.25.0.54

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-749300 -n multinode-749300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-749300 -n multinode-749300: (11.1544535s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 logs -n 25: (12.3235288s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-749300 cp testdata\cp-test.txt                                                                                 | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:16 UTC | 03 Feb 25 12:16 UTC |
	|         | multinode-749300-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:16 UTC | 03 Feb 25 12:16 UTC |
	|         | multinode-749300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:16 UTC | 03 Feb 25 12:16 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:16 UTC | 03 Feb 25 12:16 UTC |
	|         | multinode-749300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:16 UTC | 03 Feb 25 12:17 UTC |
	|         | multinode-749300:/home/docker/cp-test_multinode-749300-m02_multinode-749300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:17 UTC | 03 Feb 25 12:17 UTC |
	|         | multinode-749300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n multinode-749300 sudo cat                                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:17 UTC | 03 Feb 25 12:17 UTC |
	|         | /home/docker/cp-test_multinode-749300-m02_multinode-749300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:17 UTC | 03 Feb 25 12:17 UTC |
	|         | multinode-749300-m03:/home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:17 UTC | 03 Feb 25 12:17 UTC |
	|         | multinode-749300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n multinode-749300-m03 sudo cat                                                                    | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:17 UTC | 03 Feb 25 12:18 UTC |
	|         | /home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp testdata\cp-test.txt                                                                                 | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | multinode-749300-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | multinode-749300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | multinode-749300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | multinode-749300:/home/docker/cp-test_multinode-749300-m03_multinode-749300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:18 UTC |
	|         | multinode-749300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n multinode-749300 sudo cat                                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:18 UTC | 03 Feb 25 12:19 UTC |
	|         | /home/docker/cp-test_multinode-749300-m03_multinode-749300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt                                                        | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:19 UTC | 03 Feb 25 12:19 UTC |
	|         | multinode-749300-m02:/home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n                                                                                                  | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:19 UTC | 03 Feb 25 12:19 UTC |
	|         | multinode-749300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-749300 ssh -n multinode-749300-m02 sudo cat                                                                    | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:19 UTC | 03 Feb 25 12:19 UTC |
	|         | /home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-749300 node stop m03                                                                                           | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:19 UTC | 03 Feb 25 12:20 UTC |
	| node    | multinode-749300 node start                                                                                              | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:20 UTC | 03 Feb 25 12:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-749300                                                                                                 | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:23 UTC |                     |
	| stop    | -p multinode-749300                                                                                                      | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:23 UTC | 03 Feb 25 12:25 UTC |
	| start   | -p multinode-749300                                                                                                      | multinode-749300 | minikube5\jenkins | v1.35.0 | 03 Feb 25 12:25 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 12:25:23
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 12:25:23.595911   13136 out.go:345] Setting OutFile to fd 1416 ...
	I0203 12:25:23.651904   13136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:25:23.651904   13136 out.go:358] Setting ErrFile to fd 1980...
	I0203 12:25:23.651904   13136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:25:23.670894   13136 out.go:352] Setting JSON to false
	I0203 12:25:23.672902   13136 start.go:129] hostinfo: {"hostname":"minikube5","uptime":170124,"bootTime":1738415398,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 12:25:23.672902   13136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 12:25:23.760924   13136 out.go:177] * [multinode-749300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 12:25:23.767076   13136 notify.go:220] Checking for updates...
	I0203 12:25:23.770973   13136 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:25:23.873617   13136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 12:25:23.940774   13136 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 12:25:24.015441   13136 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 12:25:24.031789   13136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 12:25:24.052654   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:25:24.053221   13136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 12:25:29.107637   13136 out.go:177] * Using the hyperv driver based on existing profile
	I0203 12:25:29.216091   13136 start.go:297] selected driver: hyperv
	I0203 12:25:29.216091   13136 start.go:901] validating driver "hyperv" against &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:25:29.216454   13136 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 12:25:29.260672   13136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:25:29.261690   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:25:29.261690   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:25:29.261690   13136 start.go:340] cluster config:
	{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.1.53 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:25:29.262214   13136 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 12:25:29.407965   13136 out.go:177] * Starting "multinode-749300" primary control-plane node in "multinode-749300" cluster
	I0203 12:25:29.509319   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:25:29.509319   13136 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 12:25:29.509319   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:25:29.511102   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:25:29.511305   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:25:29.511305   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:25:29.513431   13136 start.go:360] acquireMachinesLock for multinode-749300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:25:29.513431   13136 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-749300"
	I0203 12:25:29.513431   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:25:29.513431   13136 fix.go:54] fixHost starting: 
	I0203 12:25:29.514215   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:32.063523   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:25:32.063866   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:32.063904   13136 fix.go:112] recreateIfNeeded on multinode-749300: state=Stopped err=<nil>
	W0203 12:25:32.063904   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:25:32.157190   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300" ...
	I0203 12:25:32.214010   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300
	I0203 12:25:35.136044   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:35.136044   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:35.136044   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:25:35.136139   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:37.183933   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:37.183933   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:37.184023   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:39.503151   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:39.503852   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:40.504190   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:42.499934   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:42.500667   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:42.500667   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:44.806728   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:44.806728   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:45.807252   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:47.833064   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:47.833064   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:47.834011   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:50.147084   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:50.147632   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:51.148776   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:53.166288   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:53.166288   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:53.166411   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:25:55.443296   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:25:55.443399   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:56.444219   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:25:58.447898   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:25:58.447898   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:25:58.448426   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:00.828501   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:00.828594   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:00.830557   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:02.816755   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:02.816841   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:02.816902   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:05.136442   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:05.137211   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:05.137465   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:26:05.140035   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:26:05.140212   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:07.066165   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:07.066165   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:07.066272   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:09.383612   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:09.383766   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:09.386904   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:09.387570   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:09.387570   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:26:09.521739   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:26:09.521851   13136 buildroot.go:166] provisioning hostname "multinode-749300"
	I0203 12:26:09.521851   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:11.482052   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:11.482052   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:11.482237   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:13.839571   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:13.839571   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:13.846444   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:13.846713   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:13.846713   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300 && echo "multinode-749300" | sudo tee /etc/hostname
	I0203 12:26:13.995994   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300
	
	I0203 12:26:13.996102   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:15.938221   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:15.938221   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:15.938319   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:18.288139   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:18.288139   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:18.292035   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:18.293062   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:18.293062   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:26:18.442137   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:26:18.442137   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:26:18.442137   13136 buildroot.go:174] setting up certificates
	I0203 12:26:18.442137   13136 provision.go:84] configureAuth start
	I0203 12:26:18.442137   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:20.426042   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:20.426445   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:20.426445   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:22.761972   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:22.761972   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:22.762679   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:24.725930   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:24.725930   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:24.726188   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:27.054617   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:27.054789   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:27.054866   13136 provision.go:143] copyHostCerts
	I0203 12:26:27.055169   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:26:27.055169   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:26:27.055169   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:26:27.055847   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:26:27.056446   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:26:27.057065   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:26:27.057065   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:26:27.057065   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:26:27.057733   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:26:27.058335   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:26:27.058335   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:26:27.058335   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:26:27.059022   13136 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300 san=[127.0.0.1 172.25.12.244 localhost minikube multinode-749300]
	I0203 12:26:27.155879   13136 provision.go:177] copyRemoteCerts
	I0203 12:26:27.162885   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:26:27.162885   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:29.103378   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:29.103500   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:29.103500   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:31.431437   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:31.431997   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:31.432322   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:26:31.534958   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3719323s)
	I0203 12:26:31.535037   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:26:31.535037   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:26:31.577184   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:26:31.577591   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0203 12:26:31.624893   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:26:31.625898   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 12:26:31.671371   13136 provision.go:87] duration metric: took 13.2290459s to configureAuth
	I0203 12:26:31.671438   13136 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:26:31.671529   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:26:31.672100   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:33.622749   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:33.622749   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:33.622979   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:35.942649   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:35.942649   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:35.946807   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:35.947332   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:35.947523   13136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:26:36.084716   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:26:36.084716   13136 buildroot.go:70] root file system type: tmpfs
	I0203 12:26:36.085014   13136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:26:36.085122   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:38.055389   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:38.055895   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:38.055994   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:40.377952   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:40.378488   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:40.383190   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:40.383274   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:40.383274   13136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:26:40.538448   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:26:40.538705   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:42.452503   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:42.452535   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:42.452602   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:44.786441   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:44.786441   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:44.791468   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:44.791602   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:44.791602   13136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:26:47.267100   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:26:47.267100   13136 machine.go:96] duration metric: took 42.1265368s to provisionDockerMachine
	I0203 12:26:47.267100   13136 start.go:293] postStartSetup for "multinode-749300" (driver="hyperv")
	I0203 12:26:47.267100   13136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:26:47.275516   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:26:47.275516   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:49.222983   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:49.222983   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:49.223539   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:51.572945   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:51.573664   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:51.574034   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:26:51.683153   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4075313s)
	I0203 12:26:51.692286   13136 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:26:51.699033   13136 command_runner.go:130] > NAME=Buildroot
	I0203 12:26:51.699127   13136 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:26:51.699127   13136 command_runner.go:130] > ID=buildroot
	I0203 12:26:51.699127   13136 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:26:51.699127   13136 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:26:51.699308   13136 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:26:51.699335   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:26:51.699771   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:26:51.700523   13136 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:26:51.700594   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:26:51.709030   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:26:51.726362   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:26:51.769796   13136 start.go:296] duration metric: took 4.5026457s for postStartSetup
	I0203 12:26:51.769933   13136 fix.go:56] duration metric: took 1m22.2555815s for fixHost
	I0203 12:26:51.770070   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:53.724415   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:26:56.093685   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:26:56.093685   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:56.098017   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:26:56.098630   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:26:56.098630   13136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:26:56.231749   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738585616.246550531
	
	I0203 12:26:56.231749   13136 fix.go:216] guest clock: 1738585616.246550531
	I0203 12:26:56.231880   13136 fix.go:229] Guest: 2025-02-03 12:26:56.246550531 +0000 UTC Remote: 2025-02-03 12:26:51.7699333 +0000 UTC m=+88.266606101 (delta=4.476617231s)
	I0203 12:26:56.231880   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:26:58.176940   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:00.531615   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:00.531896   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:00.536034   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:27:00.536034   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.244 22 <nil> <nil>}
	I0203 12:27:00.536034   13136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738585616
	I0203 12:27:00.674546   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:26:56 UTC 2025
	
	I0203 12:27:00.674546   13136 fix.go:236] clock set: Mon Feb  3 12:26:56 UTC 2025
	 (err=<nil>)
	I0203 12:27:00.674546   13136 start.go:83] releasing machines lock for "multinode-749300", held for 1m31.1600955s
	I0203 12:27:00.674546   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:02.673223   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:02.673223   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:02.673766   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:04.996525   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:04.996839   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:05.001161   13136 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:27:05.001308   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:05.009576   13136 ssh_runner.go:195] Run: cat /version.json
	I0203 12:27:05.009639   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:27:07.023280   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:07.023280   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:07.023382   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:07.028962   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:27:09.444979   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:09.444979   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:09.446032   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:27:09.467324   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:27:09.467324   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:27:09.467816   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:27:09.541099   13136 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:27:09.541597   13136 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5403845s)
	W0203 12:27:09.541788   13136 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:27:09.557958   13136 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0203 12:27:09.557958   13136 ssh_runner.go:235] Completed: cat /version.json: (4.5483313s)
	I0203 12:27:09.565228   13136 ssh_runner.go:195] Run: systemctl --version
	I0203 12:27:09.573515   13136 command_runner.go:130] > systemd 252 (252)
	I0203 12:27:09.574580   13136 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0203 12:27:09.581880   13136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:27:09.590556   13136 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0203 12:27:09.590556   13136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:27:09.598628   13136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:27:09.626887   13136 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:27:09.627009   13136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:27:09.627009   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:27:09.627157   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:27:09.660074   13136 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:27:09.668815   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:27:09.694919   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0203 12:27:09.706635   13136 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:27:09.706635   13136 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:27:09.718483   13136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:27:09.726452   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 12:27:09.753426   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:27:09.779099   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:27:09.807690   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:27:09.835424   13136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:27:09.864173   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:27:09.891574   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:27:09.920820   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
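
The cgroup-driver switch recorded above amounts to in-place edits of /etc/containerd/config.toml followed by the containerd restart seen a few lines below. A rough manual equivalent on the node, using the same sed expressions and values the log shows:

    # Force the cgroupfs driver, the pinned pause image, and the CNI conf dir.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo systemctl restart containerd
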
	I0203 12:27:09.949733   13136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:27:09.966097   13136 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:27:09.967149   13136 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:27:09.976000   13136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:27:10.005668   13136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
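
The sysctl failure above is expected before the br_netfilter module is loaded; once modprobe succeeds, the /proc/sys/net/bridge/* keys appear and IP forwarding is switched on. A quick verification on the node, mirroring the commands in the log:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward                # expected: 1
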
	I0203 12:27:10.032858   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:10.234944   13136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:27:10.265837   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:27:10.274238   13136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:27:10.294157   13136 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:27:10.294157   13136 command_runner.go:130] > [Unit]
	I0203 12:27:10.294157   13136 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:27:10.294157   13136 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:27:10.294157   13136 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:27:10.294157   13136 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:27:10.294157   13136 command_runner.go:130] > StartLimitBurst=3
	I0203 12:27:10.294157   13136 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:27:10.294157   13136 command_runner.go:130] > [Service]
	I0203 12:27:10.294157   13136 command_runner.go:130] > Type=notify
	I0203 12:27:10.294157   13136 command_runner.go:130] > Restart=on-failure
	I0203 12:27:10.294157   13136 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:27:10.294157   13136 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:27:10.294157   13136 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:27:10.294157   13136 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:27:10.294157   13136 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:27:10.294157   13136 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:27:10.294157   13136 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:27:10.294157   13136 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:27:10.294157   13136 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecStart=
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:27:10.294157   13136 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:27:10.294157   13136 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:27:10.294157   13136 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:27:10.294157   13136 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > LimitCORE=infinity
	I0203 12:27:10.294685   13136 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:27:10.294731   13136 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:27:10.294731   13136 command_runner.go:130] > TasksMax=infinity
	I0203 12:27:10.294731   13136 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:27:10.294782   13136 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:27:10.294782   13136 command_runner.go:130] > Delegate=yes
	I0203 12:27:10.294825   13136 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:27:10.294825   13136 command_runner.go:130] > KillMode=process
	I0203 12:27:10.294863   13136 command_runner.go:130] > [Install]
	I0203 12:27:10.294863   13136 command_runner.go:130] > WantedBy=multi-user.target
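
As the unit's own comments note, the bare "ExecStart=" line clears the command inherited from the base dockerd configuration before the Hyper-V-specific one is set. To inspect the effective unit and any drop-ins on the node (the drop-in directory name follows the standard systemd convention and may be empty):

    # Effective unit: base file plus drop-ins, exactly what minikube dumps above.
    sudo systemctl cat docker.service
    ls /etc/systemd/system/docker.service.d/ 2>/dev/null
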
	I0203 12:27:10.303563   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:27:10.335505   13136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:27:10.377146   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:27:10.409002   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:27:10.441022   13136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:27:10.499742   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:27:10.524703   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:27:10.559564   13136 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0203 12:27:10.568258   13136 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:27:10.575372   13136 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:27:10.584155   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:27:10.601708   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:27:10.641190   13136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:27:10.835390   13136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:27:11.018343   13136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:27:11.018560   13136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:27:11.057570   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:11.257278   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:27:13.957023   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6997143s)
	I0203 12:27:13.965163   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:27:13.996412   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:27:14.027729   13136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:27:14.224705   13136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:27:14.423681   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:14.616531   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:27:14.654124   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:27:14.685448   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:14.863656   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:27:14.963201   13136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:27:14.973423   13136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:27:14.981755   13136 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:27:14.981826   13136 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:27:14.981826   13136 command_runner.go:130] > Device: 0,22	Inode: 860         Links: 1
	I0203 12:27:14.981826   13136 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:27:14.981826   13136 command_runner.go:130] > Access: 2025-02-03 12:27:14.903146812 +0000
	I0203 12:27:14.981826   13136 command_runner.go:130] > Modify: 2025-02-03 12:27:14.903146812 +0000
	I0203 12:27:14.982024   13136 command_runner.go:130] > Change: 2025-02-03 12:27:14.906146829 +0000
	I0203 12:27:14.982024   13136 command_runner.go:130] >  Birth: -
	I0203 12:27:14.982024   13136 start.go:563] Will wait 60s for crictl version
	I0203 12:27:14.991108   13136 ssh_runner.go:195] Run: which crictl
	I0203 12:27:14.997233   13136 command_runner.go:130] > /usr/bin/crictl
	I0203 12:27:15.004269   13136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:27:15.058678   13136 command_runner.go:130] > Version:  0.1.0
	I0203 12:27:15.058678   13136 command_runner.go:130] > RuntimeName:  docker
	I0203 12:27:15.058678   13136 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:27:15.058780   13136 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:27:15.058780   13136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:27:15.065303   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:27:15.097960   13136 command_runner.go:130] > 27.4.0
	I0203 12:27:15.107089   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:27:15.136957   13136 command_runner.go:130] > 27.4.0
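
At this point /etc/crictl.yaml points the CRI tools at the cri-dockerd socket. A sketch for confirming the runtime wiring by hand on the node (values taken from the crictl and docker output above):

    # Either rely on /etc/crictl.yaml or pass the endpoint explicitly.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker version --format '{{.Server.Version}}'   # expected: 27.4.0
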
	I0203 12:27:15.142877   13136 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:27:15.142877   13136 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:27:15.147513   13136 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:27:15.147513   13136 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:27:15.147557   13136 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:27:15.147557   13136 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:27:15.149790   13136 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:27:15.149790   13136 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:27:15.157169   13136 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:27:15.164236   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
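
The bash snippet above rewrites /etc/hosts idempotently: it drops any previous host.minikube.internal line, appends the Windows-host address found on the Default Switch network (172.25.0.1 in this run), and copies the result back. To confirm the entry from inside the guest:

    grep host.minikube.internal /etc/hosts
    # expected: 172.25.0.1	host.minikube.internal
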
	I0203 12:27:15.185323   13136 kubeadm.go:883] updating cluster {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false is
tio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0203 12:27:15.185611   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:27:15.192086   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:27:15.218589   13136 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0203 12:27:15.218589   13136 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0203 12:27:15.218693   13136 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0203 12:27:15.218693   13136 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0203 12:27:15.218693   13136 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:27:15.218693   13136 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0203 12:27:15.218693   13136 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0203 12:27:15.218812   13136 docker.go:619] Images already preloaded, skipping extraction
	I0203 12:27:15.225500   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0203 12:27:15.251063   13136 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0203 12:27:15.251063   13136 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0203 12:27:15.251063   13136 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 12:27:15.251063   13136 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0203 12:27:15.251063   13136 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0203 12:27:15.251063   13136 cache_images.go:84] Images are preloaded, skipping loading
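
The preload check is just an image listing inside the guest; the same view from the host, as a sketch:

    minikube -p multinode-749300 ssh "docker images --format '{{.Repository}}:{{.Tag}}'"
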
	I0203 12:27:15.251063   13136 kubeadm.go:934] updating node { 172.25.12.244 8443 v1.32.1 docker true true} ...
	I0203 12:27:15.251063   13136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:27:15.258573   13136 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 12:27:15.323844   13136 command_runner.go:130] > cgroupfs
	I0203 12:27:15.324015   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:27:15.324015   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:27:15.324015   13136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0203 12:27:15.324096   13136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.12.244 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-749300 NodeName:multinode-749300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.12.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.12.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0203 12:27:15.324276   13136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.12.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-749300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.12.244"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 12:27:15.332063   13136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubeadm
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubectl
	I0203 12:27:15.352823   13136 command_runner.go:130] > kubelet
	I0203 12:27:15.352823   13136 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 12:27:15.361623   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 12:27:15.382454   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0203 12:27:15.412334   13136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 12:27:15.446820   13136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
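
The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. To inspect or sanity-check it there, as a sketch (the "config validate" subcommand is an assumption about this kubeadm release; the binary path is the one the log uses):

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # Assumption: "kubeadm config validate" is available in the v1.32.1 binaries.
    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
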
	I0203 12:27:15.487016   13136 ssh_runner.go:195] Run: grep 172.25.12.244	control-plane.minikube.internal$ /etc/hosts
	I0203 12:27:15.493655   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.12.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:27:15.523216   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:15.725295   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:27:15.753811   13136 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.12.244
	I0203 12:27:15.753867   13136 certs.go:194] generating shared ca certs ...
	I0203 12:27:15.753927   13136 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:15.754660   13136 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:27:15.755081   13136 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:27:15.755081   13136 certs.go:256] generating profile certs ...
	I0203 12:27:15.755748   13136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\client.key
	I0203 12:27:15.755858   13136 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888
	I0203 12:27:15.755970   13136 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.12.244]
	I0203 12:27:16.073923   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 ...
	I0203 12:27:16.073923   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888: {Name:mk40fb8c78e9cf744fa3088bb81814742e8351f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:16.075688   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888 ...
	I0203 12:27:16.075688   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888: {Name:mkcd8cc8fae2982ff1b1aaeea5284f71e52afe02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:16.076940   13136 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt.a6060888 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt
	I0203 12:27:16.090519   13136 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key.a6060888 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key
	I0203 12:27:16.091518   13136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key
	I0203 12:27:16.091518   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 12:27:16.092524   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 12:27:16.093541   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 12:27:16.093541   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 12:27:16.093541   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:27:16.093541   13136 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:27:16.093541   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:27:16.094520   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.095519   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.096518   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:27:16.141373   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:27:16.185367   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:27:16.230961   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:27:16.276910   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0203 12:27:16.326911   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 12:27:16.373019   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 12:27:16.417192   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 12:27:16.462519   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:27:16.510893   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:27:16.555132   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:27:16.601150   13136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 12:27:16.639899   13136 ssh_runner.go:195] Run: openssl version
	I0203 12:27:16.648676   13136 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:27:16.656369   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:27:16.685198   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.692611   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.692611   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.700927   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:27:16.709793   13136 command_runner.go:130] > 3ec20f2e
	I0203 12:27:16.717616   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 12:27:16.746453   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:27:16.771868   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.779706   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.780156   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.788709   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:27:16.797056   13136 command_runner.go:130] > b5213941
	I0203 12:27:16.804460   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:27:16.830489   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:27:16.857958   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.864028   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.864028   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.872029   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:27:16.881272   13136 command_runner.go:130] > 51391683
	I0203 12:27:16.888830   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
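
Each CA file is linked into the system trust store under its OpenSSL subject hash, which is what the openssl x509 -hash calls above compute. One round of that wiring by hand, using the minikubeCA values from this run:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
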
	I0203 12:27:16.915968   13136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:27:16.923196   13136 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:27:16.923281   13136 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0203 12:27:16.923281   13136 command_runner.go:130] > Device: 8,1	Inode: 7336797     Links: 1
	I0203 12:27:16.923281   13136 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0203 12:27:16.923281   13136 command_runner.go:130] > Access: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923281   13136 command_runner.go:130] > Modify: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923281   13136 command_runner.go:130] > Change: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.923345   13136 command_runner.go:130] >  Birth: 2025-02-03 12:04:43.777432260 +0000
	I0203 12:27:16.931115   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0203 12:27:16.941453   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.949434   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0203 12:27:16.958581   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.966784   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0203 12:27:16.976228   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:16.983764   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0203 12:27:16.992433   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:17.001413   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0203 12:27:17.010458   13136 command_runner.go:130] > Certificate will not expire
	I0203 12:27:17.018119   13136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0203 12:27:17.027219   13136 command_runner.go:130] > Certificate will not expire
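
"-checkend 86400" asks whether a certificate expires within the next 24 hours; exit status 0 ("Certificate will not expire") means it does not, which is why each check above passes. The same sweep over the profile certs as a short loop (the etcd/* certs checked above live one directory deeper):

    for c in apiserver apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: will not expire within 24h"
    done
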
	I0203 12:27:17.027493   13136 kubeadm.go:392] StartCluster: {Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.8.35 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:27:17.033846   13136 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 12:27:17.068733   13136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0203 12:27:17.088115   13136 command_runner.go:130] > /var/lib/minikube/etcd:
	I0203 12:27:17.088115   13136 command_runner.go:130] > member
	I0203 12:27:17.088409   13136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0203 12:27:17.088487   13136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0203 12:27:17.096441   13136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 12:27:17.114662   13136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 12:27:17.115714   13136 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-749300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:17.116281   13136 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-749300" cluster setting kubeconfig missing "multinode-749300" context setting]
	I0203 12:27:17.116975   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:17.132871   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:17.134231   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:27:17.135337   13136 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 12:27:17.143595   13136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 12:27:17.163603   13136 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0203 12:27:17.163603   13136 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0203 12:27:17.163603   13136 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0203 12:27:17.163603   13136 command_runner.go:130] >  kind: InitConfiguration
	I0203 12:27:17.163603   13136 command_runner.go:130] >  localAPIEndpoint:
	I0203 12:27:17.163603   13136 command_runner.go:130] > -  advertiseAddress: 172.25.1.53
	I0203 12:27:17.163603   13136 command_runner.go:130] > +  advertiseAddress: 172.25.12.244
	I0203 12:27:17.163603   13136 command_runner.go:130] >    bindPort: 8443
	I0203 12:27:17.163603   13136 command_runner.go:130] >  bootstrapTokens:
	I0203 12:27:17.163603   13136 command_runner.go:130] >    - groups:
	I0203 12:27:17.163603   13136 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0203 12:27:17.163603   13136 command_runner.go:130] >    name: "multinode-749300"
	I0203 12:27:17.163603   13136 command_runner.go:130] >    kubeletExtraArgs:
	I0203 12:27:17.163603   13136 command_runner.go:130] >      - name: "node-ip"
	I0203 12:27:17.163603   13136 command_runner.go:130] > -      value: "172.25.1.53"
	I0203 12:27:17.163603   13136 command_runner.go:130] > +      value: "172.25.12.244"
	I0203 12:27:17.163603   13136 command_runner.go:130] >    taints: []
	I0203 12:27:17.163603   13136 command_runner.go:130] >  ---
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0203 12:27:17.163603   13136 command_runner.go:130] >  kind: ClusterConfiguration
	I0203 12:27:17.163603   13136 command_runner.go:130] >  apiServer:
	I0203 12:27:17.163603   13136 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.25.1.53"]
	I0203 12:27:17.163603   13136 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	I0203 12:27:17.163603   13136 command_runner.go:130] >    extraArgs:
	I0203 12:27:17.163603   13136 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0203 12:27:17.164346   13136 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0203 12:27:17.164346   13136 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.1.53
	+  advertiseAddress: 172.25.12.244
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-749300"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.25.1.53"
	+      value: "172.25.12.244"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.1.53"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.12.244"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
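
The drift check is a plain unified diff between the config currently on disk and the newly rendered one; any output makes minikube reconfigure the control plane from the new file, as it announces above. Reproducing it on the node:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # In this run the only drift is the node IP change from 172.25.1.53 to 172.25.12.244
    # (advertiseAddress, kubelet node-ip, and apiserver certSANs).
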
	I0203 12:27:17.164346   13136 kubeadm.go:1160] stopping kube-system containers ...
	I0203 12:27:17.172115   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 12:27:17.202050   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:27:17.202050   13136 command_runner.go:130] > a6484d4fc4d7
	I0203 12:27:17.202050   13136 command_runner.go:130] > a166f3c8776d
	I0203 12:27:17.202050   13136 command_runner.go:130] > 26e5557dc32c
	I0203 12:27:17.202050   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:27:17.202050   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:27:17.202050   13136 command_runner.go:130] > cb49b32ba085
	I0203 12:27:17.202050   13136 command_runner.go:130] > 1ff01fa7d8c6
	I0203 12:27:17.202050   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:27:17.202050   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:27:17.202050   13136 command_runner.go:130] > ebc67da1b9e9
	I0203 12:27:17.202050   13136 command_runner.go:130] > e3efb81aa459
	I0203 12:27:17.202050   13136 command_runner.go:130] > b1b473818438
	I0203 12:27:17.202050   13136 command_runner.go:130] > d8d9e598659f
	I0203 12:27:17.202050   13136 command_runner.go:130] > 16d03cfd685d
	I0203 12:27:17.202050   13136 command_runner.go:130] > d3c93fcfaa46
	I0203 12:27:17.202050   13136 docker.go:483] Stopping containers: [fe91a8d012ae a6484d4fc4d7 a166f3c8776d 26e5557dc32c fab2d9be6b5c c6dc514e98f6 cb49b32ba085 1ff01fa7d8c6 8ade10c0fb09 88c40ca9aa3c ebc67da1b9e9 e3efb81aa459 b1b473818438 d8d9e598659f 16d03cfd685d d3c93fcfaa46]
	I0203 12:27:17.208947   13136 ssh_runner.go:195] Run: docker stop fe91a8d012ae a6484d4fc4d7 a166f3c8776d 26e5557dc32c fab2d9be6b5c c6dc514e98f6 cb49b32ba085 1ff01fa7d8c6 8ade10c0fb09 88c40ca9aa3c ebc67da1b9e9 e3efb81aa459 b1b473818438 d8d9e598659f 16d03cfd685d d3c93fcfaa46
	I0203 12:27:17.235967   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:27:17.235967   13136 command_runner.go:130] > a6484d4fc4d7
	I0203 12:27:17.235967   13136 command_runner.go:130] > a166f3c8776d
	I0203 12:27:17.236382   13136 command_runner.go:130] > 26e5557dc32c
	I0203 12:27:17.236881   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:27:17.236881   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:27:17.236881   13136 command_runner.go:130] > cb49b32ba085
	I0203 12:27:17.236881   13136 command_runner.go:130] > 1ff01fa7d8c6
	I0203 12:27:17.236881   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:27:17.237090   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:27:17.237475   13136 command_runner.go:130] > ebc67da1b9e9
	I0203 12:27:17.237475   13136 command_runner.go:130] > e3efb81aa459
	I0203 12:27:17.237475   13136 command_runner.go:130] > b1b473818438
	I0203 12:27:17.237475   13136 command_runner.go:130] > d8d9e598659f
	I0203 12:27:17.237475   13136 command_runner.go:130] > 16d03cfd685d
	I0203 12:27:17.237475   13136 command_runner.go:130] > d3c93fcfaa46
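
The container sweep above lists kube-system containers by name pattern and then stops them. The same two steps condensed, using the filter and format strings from the log (run inside the guest):

    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    [ -n "$ids" ] && docker stop $ids
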
	I0203 12:27:17.248126   13136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 12:27:17.283854   13136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 12:27:17.301586   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0203 12:27:17.301679   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0203 12:27:17.301745   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0203 12:27:17.301745   13136 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:27:17.301745   13136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 12:27:17.301745   13136 kubeadm.go:157] found existing configuration files:
	
	I0203 12:27:17.309587   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 12:27:17.326960   13136 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:27:17.327045   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0203 12:27:17.336246   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0203 12:27:17.360990   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 12:27:17.377859   13136 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:27:17.377859   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0203 12:27:17.388022   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0203 12:27:17.413284   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 12:27:17.429587   13136 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:27:17.429683   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0203 12:27:17.438139   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 12:27:17.462144   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 12:27:17.479394   13136 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:27:17.479394   13136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0203 12:27:17.488457   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
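The grep-then-remove sequence above is the stale-kubeconfig cleanup that runs before the init phases are replayed: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. A minimal stand-alone sketch of that pattern follows; the runCmd helper is hypothetical and executes locally rather than over the SSH session the log shows.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for the test's SSH runner: it executes a
// shell command and reports only whether it succeeded.
func runCmd(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range configs {
		// If the file is missing or does not mention the expected endpoint,
		// remove it so "kubeadm init phase kubeconfig" can regenerate it.
		if err := runCmd(fmt.Sprintf("sudo grep %s %s", endpoint, conf)); err != nil {
			_ = runCmd("sudo rm -f " + conf)
		}
	}
}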
	I0203 12:27:17.512799   13136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 12:27:17.530695   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:17.759091   13136 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0203 12:27:17.759188   13136 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 12:27:17.759285   13136 command_runner.go:130] > [certs] Using the existing "sa" key
	I0203 12:27:17.759285   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 12:27:19.246920   13136 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 12:27:19.246920   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4876179s)
	I0203 12:27:19.246920   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:27:19.546550   13136 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:27:19.546550   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 12:27:19.638798   13136 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 12:27:19.638798   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:19.721204   13136 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
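After the cleanup, the log drives the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) one at a time against the staged /var/tmp/minikube/kubeadm.yaml, so a failure is attributable to a specific step. A hedged sketch of that sequencing, again with a local shell runner standing in for the SSH session; the binary path and version are copied from the log above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const binDir = "/var/lib/minikube/binaries/v1.32.1" // version taken from the log
	const cfg = "/var/tmp/minikube/kubeadm.yaml"

	// The first five phases visible above, run in order; "addon all" follows
	// later in the log, once the apiserver is healthy.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
	}
}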
	I0203 12:27:19.725187   13136 api_server.go:52] waiting for apiserver process to appear ...
	I0203 12:27:19.733212   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:20.237681   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:20.734957   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.236573   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.736191   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:27:21.760186   13136 command_runner.go:130] > 1987
	I0203 12:27:21.761926   13136 api_server.go:72] duration metric: took 2.0366371s to wait for apiserver process to appear ...
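The ~2s gap before pgrep finally prints a PID ("1987") is a simple poll: the same "pgrep -xnf kube-apiserver.*minikube.*" command is re-run roughly every 500ms until a matching process exists. A minimal sketch of that wait loop, under the assumption of local execution and a hypothetical timeout value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout expires.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // pgrep prints the newest matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver PID:", pid)
}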
	I0203 12:27:21.761926   13136 api_server.go:88] waiting for apiserver healthz status ...
	I0203 12:27:21.761991   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:24.805810   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 12:27:24.805810   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 12:27:24.805810   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:24.892495   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 12:27:24.892606   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 12:27:25.263040   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:25.272440   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:25.272772   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:25.762168   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:25.775975   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:25.775975   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:26.262974   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:26.271990   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 12:27:26.271990   13136 api_server.go:103] status: https://172.25.12.244:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 12:27:26.763287   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:27:26.770907   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
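The repeated /healthz requests above trace the usual apiserver startup sequence: 403 while the RBAC bootstrap roles that allow anonymous health probes are not yet in place, then 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) still fail, and finally 200 once every hook reports ok. A hedged sketch of that polling loop follows; the address is copied from the log, the ~500ms retry cadence mirrors the timestamps above, and the insecure TLS setting is only to keep the sketch self-contained (a real client would trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification here only for illustration; the real
	// health check trusts the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.25.12.244:8443/healthz" // address taken from the log above

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 responses are expected while bootstrap hooks finish.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}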
	I0203 12:27:26.771574   13136 round_trippers.go:463] GET https://172.25.12.244:8443/version
	I0203 12:27:26.771621   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:26.771654   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:26.771654   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:26.782427   13136 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 12:27:26.782427   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:26.782427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Content-Length: 263
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:26 GMT
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Audit-Id: 88e97992-82b7-456c-adfd-c35de1f165c8
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:26.782427   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:26.782427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:26.782427   13136 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0203 12:27:26.782427   13136 api_server.go:141] control plane version: v1.32.1
	I0203 12:27:26.782427   13136 api_server.go:131] duration metric: took 5.0204447s to wait for apiserver health ...
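Once healthz returns 200, a single GET /version confirms the control-plane version (v1.32.1 here); only a few fields of the JSON body are needed. A small sketch under the same assumptions as the health-check sketch above (address from the log, insecure TLS for illustration only):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// versionInfo holds the subset of /version fields the log prints.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://172.25.12.244:8443/version")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("control plane version: %s (Kubernetes %s.%s, %s)\n", v.GitVersion, v.Major, v.Minor, v.Platform)
}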
	I0203 12:27:26.782427   13136 cni.go:84] Creating CNI manager for ""
	I0203 12:27:26.782427   13136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0203 12:27:26.785304   13136 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0203 12:27:26.797151   13136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0203 12:27:26.811312   13136 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0203 12:27:26.811312   13136 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0203 12:27:26.811312   13136 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0203 12:27:26.811312   13136 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0203 12:27:26.811312   13136 command_runner.go:130] > Access: 2025-02-03 12:26:01.341859678 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] > Change: 2025-02-03 12:25:49.033000000 +0000
	I0203 12:27:26.811312   13136 command_runner.go:130] >  Birth: -
	I0203 12:27:26.811312   13136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0203 12:27:26.811312   13136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0203 12:27:26.891517   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0203 12:27:27.962216   13136 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0203 12:27:27.962216   13136 command_runner.go:130] > daemonset.apps/kindnet configured
	I0203 12:27:27.962216   13136 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0706875s)
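The CNI step above stats /opt/cni/bin/portmap, copies a kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the pinned kubectl binary; because "kubectl apply" is idempotent, re-running it on an existing cluster reports the kindnet objects as "unchanged"/"configured" rather than failing. A sketch of that apply step, with paths copied from the log and local execution assumed in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
	args := []string{
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	}
	out, err := exec.Command("sudo", append([]string{kubectl}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}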
	I0203 12:27:27.962216   13136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 12:27:27.962216   13136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0203 12:27:27.962216   13136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0203 12:27:27.962827   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:27:27.962827   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:27.962902   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:27.962902   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:27.969361   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:27.969361   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:27.969361   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:27.969361   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:27 GMT
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Audit-Id: 4b7e5a12-4ad9-4445-bd24-cef0f8ecc3a0
	I0203 12:27:27.969361   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:27.970591   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1831"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91050 chars]
	I0203 12:27:27.975815   13136 system_pods.go:59] 12 kube-system pods found
	I0203 12:27:27.976790   13136 system_pods.go:61] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0203 12:27:27.976790   13136 system_pods.go:61] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:27:27.976790   13136 system_pods.go:61] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 12:27:27.976790   13136 system_pods.go:61] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:27:27.976790   13136 system_pods.go:74] duration metric: took 14.5737ms to wait for pod list to return data ...
	I0203 12:27:27.976790   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:27:27.976790   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:27:27.976790   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:27.976790   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:27.976790   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:27.981491   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:27.981491   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Audit-Id: f38bc849-5eec-47c6-b79f-6f65cc41c97e
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:27.981491   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:27.981491   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:27.981491   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:27 GMT
	I0203 12:27:27.981491   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1831"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1751","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15625 chars]
	I0203 12:27:27.983178   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983178   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983252   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:27:27.983252   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:27:27.983252   13136 node_conditions.go:105] duration metric: took 6.4613ms to run NodePressure ...
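The NodePressure check above lists all nodes and reads each one's capacity (2 CPUs and 17734596Ki of ephemeral storage per node in this run). A hedged client-go sketch of that read; the k8s.io/client-go dependency and the /etc/kubernetes/admin.conf kubeconfig path are assumptions made to keep the example self-contained, not details taken from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed; inside the VM the admin config lives here.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity.Cpu()
		storage := n.Status.Capacity.StorageEphemeral()
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}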
	I0203 12:27:27.983252   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 12:27:28.578159   13136 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0203 12:27:28.578159   13136 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0203 12:27:28.578256   13136 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0203 12:27:28.578336   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0203 12:27:28.578336   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.578336   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.578336   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.581675   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:28.581746   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Audit-Id: 01db7b09-7ad2-4996-911e-f77a5f75dbee
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.581746   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.581746   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.581746   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.581924   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1844"},"items":[{"metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1803","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31685 chars]
	I0203 12:27:28.584130   13136 kubeadm.go:739] kubelet initialised
	I0203 12:27:28.584207   13136 kubeadm.go:740] duration metric: took 5.8739ms waiting for restarted kubelet to initialise ...
	I0203 12:27:28.584207   13136 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:27:28.584283   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:27:28.584359   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.584359   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.584359   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.598596   13136 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0203 12:27:28.598596   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.598596   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.598596   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Audit-Id: 20e171f0-0ab9-41de-8ac2-a9b4f5bb53c9
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.598596   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.600213   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1845"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90859 chars]
	I0203 12:27:28.603009   13136 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.603009   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:27:28.603009   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.603009   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.604010   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.609179   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.609179   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.609179   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.609179   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.609179   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.609248   13136 round_trippers.go:580]     Audit-Id: bddd1d6a-350b-421a-81c0-0dfd169a8647
	I0203 12:27:28.609312   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:27:28.610019   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.610019   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.610019   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.610019   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.615051   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.615051   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Audit-Id: 6981f24b-5d83-4ed7-be9a-b49d11381fa0
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.615051   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.615131   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.615131   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.615131   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.615397   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.615845   13136 pod_ready.go:98] node "multinode-749300" hosting pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.615872   13136 pod_ready.go:82] duration metric: took 12.8635ms for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.615872   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
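The pod_ready checks above apply a small guard: before waiting on a pod's own Ready condition, the hosting node's Ready condition is fetched, and the wait is skipped when the node itself is still NotReady, as it is here right after the kubelet restart. A hedged client-go sketch of that guard, under the same assumptions as the previous sketch (k8s.io/client-go dependency, admin.conf kubeconfig path); the pod name is the one from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node has condition Ready=True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // path assumed
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-668d6bf9bc-v2gkp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Mirror the log's behaviour: if the hosting node is not Ready yet, skip
	// waiting on the pod's own Ready condition.
	if !nodeReady(node) {
		fmt.Printf("node %q not Ready, skipping wait for pod %q\n", node.Name, pod.Name)
		return
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
		}
	}
}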
	I0203 12:27:28.615872   13136 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.615872   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:27:28.615872   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.615872   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.615872   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.632686   13136 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0203 12:27:28.632791   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Audit-Id: 692cc1ac-b2dd-4851-b773-29173b51855c
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.632791   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.632791   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.632791   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.633074   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1803","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6830 chars]
	I0203 12:27:28.633703   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.633703   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.633703   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.633703   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.640278   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:28.641359   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.641359   13136 round_trippers.go:580]     Audit-Id: bf8e2cf8-d33e-458d-bdb7-9408d32eb7b0
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.641421   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.641421   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.641421   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.641566   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.641790   13136 pod_ready.go:98] node "multinode-749300" hosting pod "etcd-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.641790   13136 pod_ready.go:82] duration metric: took 25.9173ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.641790   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "etcd-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.641790   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.641790   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:27:28.641790   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.641790   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.641790   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.647352   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:28.647352   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Audit-Id: 02090c82-5dfe-4079-beea-8e3aa8909e25
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.647352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.647352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.647352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.647352   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1804","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8283 chars]
	I0203 12:27:28.648387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.648387   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.648387   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.648387   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.662371   13136 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0203 12:27:28.663319   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Audit-Id: b04c6cb4-42b5-4afb-9b14-79243ccf21e2
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.663386   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.663386   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.663386   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.663386   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.663386   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-apiserver-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.663386   13136 pod_ready.go:82] duration metric: took 21.5954ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.663386   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-apiserver-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.663386   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.663386   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:27:28.663386   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.663386   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.663386   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.670390   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:27:28.670538   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.670584   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.670584   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.670584   13136 round_trippers.go:580]     Audit-Id: 4270eb5b-fb6c-4928-8e23-644a50c48faf
	I0203 12:27:28.670878   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1800","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0203 12:27:28.671523   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.671583   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.671583   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.671583   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.673667   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:27:28.673667   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Audit-Id: 48a91d29-157a-4efa-a3a4-8a5598956637
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.673667   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.673667   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.673667   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.674247   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.674747   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-controller-manager-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.674747   13136 pod_ready.go:82] duration metric: took 11.3617ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.674811   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-controller-manager-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.674811   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:28.778670   13136 request.go:632] Waited for 103.8016ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:27:28.778670   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:27:28.778670   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.778670   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.778670   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.782680   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:28.783079   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Audit-Id: f9fed813-67cc-4bfa-819a-8ea2ab62c5da
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.783079   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.783079   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.783079   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.783667   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:27:28.978613   13136 request.go:632] Waited for 193.25ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.979036   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:28.979036   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:28.979036   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:28.979036   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:28.982210   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:28.982670   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:28.982670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:28.982670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:28 GMT
	I0203 12:27:28.982670   13136 round_trippers.go:580]     Audit-Id: 8830ef73-1d9e-4a80-a295-0387fdd97530
	I0203 12:27:28.982889   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:28.983335   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-proxy-9g92t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.983335   13136 pod_ready.go:82] duration metric: took 308.5208ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:28.983421   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-proxy-9g92t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:28.983421   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.178563   13136 request.go:632] Waited for 195.0576ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:27:29.178563   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:27:29.178563   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.178563   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.178563   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.183465   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.183465   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Audit-Id: a4811cf4-2cc2-46bf-b6f5-5d8c30923327
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.183465   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.183465   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.183465   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.183795   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"625","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6192 chars]
	I0203 12:27:29.379120   13136 request.go:632] Waited for 194.4708ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:27:29.379120   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:27:29.379120   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.379120   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.379120   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.382603   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:29.383573   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Audit-Id: c725a749-bc11-4ced-a102-1790fd5816ba
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.383573   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.383573   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.383573   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.383710   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"1637","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3825 chars]
	I0203 12:27:29.384288   13136 pod_ready.go:93] pod "kube-proxy-ggnq7" in "kube-system" namespace has status "Ready":"True"
	I0203 12:27:29.384288   13136 pod_ready.go:82] duration metric: took 400.8629ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.384288   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.578710   13136 request.go:632] Waited for 194.3449ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:27:29.578710   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:27:29.578710   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.578710   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.578710   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.582860   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.582937   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.582937   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.582937   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Audit-Id: 8beed07b-295a-4536-b88c-b8fc072b7160
	I0203 12:27:29.582937   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.583136   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:27:29.779113   13136 request.go:632] Waited for 195.2874ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:27:29.779113   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:27:29.779499   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.779499   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.779499   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.783724   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:29.783724   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Audit-Id: 076d6630-48d5-4d1d-bbb2-8b6cb1857772
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.783724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.783724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.783724   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:29 GMT
	I0203 12:27:29.784124   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:27:29.784635   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:27:29.784705   13136 pod_ready.go:82] duration metric: took 400.4125ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:29.784705   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:27:29.784705   13136 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:27:29.978718   13136 request.go:632] Waited for 193.9433ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:27:29.978718   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:27:29.978718   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:29.978718   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:29.978718   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:29.983979   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:29.983979   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:29.983979   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Audit-Id: 51270b46-733a-4c19-8837-073f6f0e1762
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:29.983979   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:29.984082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:29.984238   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1802","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5807 chars]
	I0203 12:27:30.179107   13136 request.go:632] Waited for 194.4324ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.179107   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.179107   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:30.179107   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:30.179107   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:30.183443   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:30.184177   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:30.184177   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:30.184177   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Audit-Id: a1b177a2-c298-4872-9b36-d2d11c68f6f5
	I0203 12:27:30.184177   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:30.184557   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:30.185089   13136 pod_ready.go:98] node "multinode-749300" hosting pod "kube-scheduler-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:30.185089   13136 pod_ready.go:82] duration metric: took 400.3795ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	E0203 12:27:30.185089   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300" hosting pod "kube-scheduler-multinode-749300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300" has status "Ready":"False"
	I0203 12:27:30.185157   13136 pod_ready.go:39] duration metric: took 1.600933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:27:30.185189   13136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 12:27:30.202790   13136 command_runner.go:130] > -16
	I0203 12:27:30.202790   13136 ops.go:34] apiserver oom_adj: -16
	I0203 12:27:30.202790   13136 kubeadm.go:597] duration metric: took 13.1141562s to restartPrimaryControlPlane
	I0203 12:27:30.202943   13136 kubeadm.go:394] duration metric: took 13.1751491s to StartCluster
	I0203 12:27:30.202943   13136 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:30.203202   13136 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:27:30.204742   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:27:30.206192   13136 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 12:27:30.206192   13136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0203 12:27:30.206477   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:27:30.209434   13136 out.go:177] * Verifying Kubernetes components...
	I0203 12:27:30.213569   13136 out.go:177] * Enabled addons: 
	I0203 12:27:30.220007   13136 addons.go:514] duration metric: took 13.8147ms for enable addons: enabled=[]
	I0203 12:27:30.224360   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:27:30.476666   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:27:30.507616   13136 node_ready.go:35] waiting up to 6m0s for node "multinode-749300" to be "Ready" ...
	I0203 12:27:30.507840   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:30.507840   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:30.507923   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:30.507923   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:30.510776   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:27:30.511553   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:30 GMT
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Audit-Id: c7a5e1c8-57d7-4efa-a0fc-09f3d91e8274
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:30.511553   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:30.511553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:30.511553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:30.511706   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:31.008151   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:31.008544   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:31.008544   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:31.008544   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:31.013207   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:31.013282   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Audit-Id: c045ce5c-99e2-4667-a8cd-9ec9b890debd
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:31.013282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:31.013282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:31.013282   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:31 GMT
	I0203 12:27:31.013527   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:31.508250   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:31.508250   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:31.508250   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:31.508250   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:31.512972   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:31.512972   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:31.512972   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:31.512972   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:31 GMT
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Audit-Id: 36d46f36-b463-4858-8031-8598ced3026b
	I0203 12:27:31.512972   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:31.512972   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.008477   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:32.008477   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:32.008477   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:32.008477   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:32.012796   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:32.012796   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:32.012796   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:32.012796   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:32 GMT
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Audit-Id: 416fdfdd-4fad-43d5-8f41-e19a6424eff4
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:32.012796   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:32.012796   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.507842   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:32.507842   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:32.507842   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:32.507842   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:32.512911   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:32.512974   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:32.513021   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:32.513021   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:32 GMT
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Audit-Id: f2ff10b3-e952-4a87-9901-aab74a1df40f
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:32.513021   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:32.513145   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:32.513788   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:33.007951   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:33.008424   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:33.008424   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:33.008424   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:33.015835   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:27:33.015835   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:33.015835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:33.015835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:33 GMT
	I0203 12:27:33.015835   13136 round_trippers.go:580]     Audit-Id: bd59e2d8-37d0-434f-b318-1504d80acb12
	I0203 12:27:33.015835   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:33.509325   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:33.509325   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:33.509325   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:33.509325   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:33.512967   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:33.513083   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:33.513083   13136 round_trippers.go:580]     Audit-Id: b022ddbe-3ddb-4415-bbb9-a03268cbe56e
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:33.513193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:33.513193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:33.513193   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:33 GMT
	I0203 12:27:33.513430   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:34.008319   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:34.008319   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:34.008319   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:34.008319   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:34.012486   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:34.012486   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Audit-Id: 14c5d798-b001-4970-92bd-db80f8ec2436
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:34.012486   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:34.012486   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:34.012486   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:34 GMT
	I0203 12:27:34.012486   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:34.508430   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:34.508430   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:34.508430   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:34.508502   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:34.513022   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:34.513022   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:34 GMT
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Audit-Id: dfe2c8c7-ca2e-4b8e-8d18-1f2eb795336a
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:34.513022   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:34.513022   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:34.513141   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:34.513189   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:35.008273   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:35.008273   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:35.008273   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:35.008273   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:35.013658   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:35.013722   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:35.013722   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:35.013722   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:35.013763   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:35 GMT
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Audit-Id: 3e607dde-7638-4620-bbb7-9605c7a969a6
	I0203 12:27:35.013763   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:35.014030   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:35.014515   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:35.508732   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:35.508732   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:35.508732   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:35.508732   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:35.513437   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:35.513516   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Audit-Id: 822b8e93-c5eb-4054-b02c-67f2e5e1cce7
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:35.513516   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:35.513516   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:35.513605   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:35.513605   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:35 GMT
	I0203 12:27:35.513718   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:36.008116   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:36.008116   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:36.008116   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:36.008762   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:36.014511   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:36.014553   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:36 GMT
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Audit-Id: 41863f20-a232-4db3-9c62-a992f9cd8125
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:36.014553   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:36.014553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:36.014553   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:36.014553   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:36.508775   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:36.508775   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:36.508775   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:36.508775   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:36.513523   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:36.513523   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:36.513523   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:36.513523   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:36 GMT
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Audit-Id: d161e549-ac93-4829-b36c-8c3c5fcb9c82
	I0203 12:27:36.513523   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:36.513700   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.008800   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:37.008867   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:37.008867   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:37.008867   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:37.013383   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:37.013383   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Audit-Id: 6947c5d2-bd18-4339-ab7c-355a94dca74d
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:37.013383   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:37.013383   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:37.013383   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:37 GMT
	I0203 12:27:37.013383   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.508109   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:37.508109   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:37.508109   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:37.508109   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:37.512690   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:37.512690   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:37.512690   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:37.512773   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:37.512773   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:37 GMT
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Audit-Id: bc41146d-8e9b-4a46-bbd4-721ac375fbd6
	I0203 12:27:37.512773   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:37.513121   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:37.513676   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:38.008797   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:38.008797   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:38.008797   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:38.008797   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:38.013740   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:38.013740   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:38.013845   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:38.013845   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:38 GMT
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Audit-Id: b4412b13-8d20-4017-932b-eaae432cb5c2
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:38.013845   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:38.014051   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1834","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0203 12:27:38.509516   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:38.509516   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:38.509516   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:38.509516   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:38.518281   13136 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0203 12:27:38.518281   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:38.518281   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:38.518281   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:38 GMT
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Audit-Id: c295925e-b3e8-443e-a4eb-4840cd95329a
	I0203 12:27:38.518281   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:38.518281   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:39.008348   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:39.008348   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:39.008348   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:39.008348   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:39.013316   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:39.013316   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Audit-Id: 16c03dfe-dacc-4c41-bb0c-6a1e5586acc7
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:39.013316   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:39.013316   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:39.013316   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:39 GMT
	I0203 12:27:39.013316   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:39.508417   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:39.508417   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:39.508417   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:39.508417   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:39.511841   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:39.511841   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:39.511841   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:39.511841   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:39 GMT
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Audit-Id: 8e2055ac-aa56-42cb-ab22-a749b07a4bca
	I0203 12:27:39.511841   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:39.511841   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:40.008463   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:40.008463   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:40.008463   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:40.008463   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:40.011940   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:40.011940   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:40.011940   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:40 GMT
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Audit-Id: 6d8ed845-c69e-4ffa-b397-8c5b40203683
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:40.011940   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:40.011940   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:40.011940   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:40.012987   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:40.508720   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:40.508720   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:40.508720   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:40.508720   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:40.512753   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:40.512925   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:40.512994   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:40.513062   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:40 GMT
	I0203 12:27:40.513164   13136 round_trippers.go:580]     Audit-Id: a49ce82b-3b9b-4898-98fe-94c27801bf47
	I0203 12:27:40.513186   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:40.513186   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:40.513186   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:40.513186   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:41.008687   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:41.008687   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:41.008687   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:41.008687   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:41.013470   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:41.013470   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Audit-Id: e4fbfd5c-f296-47e8-8312-d997b5d82ce7
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:41.013470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:41.013470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:41.013470   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:41 GMT
	I0203 12:27:41.013470   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:41.508313   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:41.508313   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:41.508313   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:41.508313   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:41.514266   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:41.514266   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:41.514378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:41 GMT
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Audit-Id: df0d383e-d08c-4869-983e-f843dcb93919
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:41.514378   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:41.514378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:41.514514   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:42.008729   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:42.008729   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:42.008729   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:42.008729   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:42.015343   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:42.015343   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:42.015441   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:42.015441   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:42 GMT
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Audit-Id: 936fb3f8-94f1-4466-83fa-901ea373139c
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:42.015441   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:42.015759   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:42.015880   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:42.508548   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:42.508548   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:42.508548   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:42.508548   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:42.513590   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:42.513716   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:42 GMT
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Audit-Id: fdc4180b-6cd5-49e8-8095-7fd73de99d23
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:42.513716   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:42.513716   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:42.513716   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:42.513962   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:43.008067   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:43.008067   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:43.008067   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:43.008067   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:43.012432   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:43.012432   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:43.012432   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:43 GMT
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Audit-Id: 6f3232af-ecf1-4e39-9843-205db8d993a0
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:43.012432   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:43.012432   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:43.012757   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:43.508716   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:43.508716   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:43.508716   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:43.508716   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:43.513113   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:43.513113   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Audit-Id: 46d84bfe-5909-46e7-9f54-398c874ed7d0
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:43.513188   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:43.513188   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:43.513188   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:43 GMT
	I0203 12:27:43.513358   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.008524   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:44.008524   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:44.008524   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:44.008524   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:44.012491   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:44.012587   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:44.012587   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:44.012587   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:44 GMT
	I0203 12:27:44.012587   13136 round_trippers.go:580]     Audit-Id: 61476df4-83e4-49a7-800a-7b30f83515e2
	I0203 12:27:44.012664   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:44.012810   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.508162   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:44.508162   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:44.508162   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:44.508162   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:44.512359   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:44.512452   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:44.512452   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:44 GMT
	I0203 12:27:44.512452   13136 round_trippers.go:580]     Audit-Id: df381f70-7d17-451f-8a3a-e2f1443be16c
	I0203 12:27:44.512512   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:44.512512   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:44.512512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:44.512512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:44.512886   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:44.513312   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:45.008904   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:45.008904   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:45.008904   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:45.008904   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:45.012475   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:45.012710   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Audit-Id: a10b4f61-0364-4b8f-92a9-a5b26aa407a7
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:45.012710   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:45.012710   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:45.012710   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:45 GMT
	I0203 12:27:45.013109   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:45.509060   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:45.509060   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:45.509060   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:45.509060   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:45.512772   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:45.513425   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:45.513425   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:45.513425   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:45 GMT
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Audit-Id: da278f94-e32d-4aef-bb62-f626e6360621
	I0203 12:27:45.513425   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:45.513603   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.008387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:46.008387   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:46.008387   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:46.008387   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:46.012738   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:46.013019   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:46.013019   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:46.013019   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:46 GMT
	I0203 12:27:46.013019   13136 round_trippers.go:580]     Audit-Id: c6d2145a-3b10-460f-a7eb-7b173102cd21
	I0203 12:27:46.013234   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.508548   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:46.508548   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:46.508548   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:46.508548   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:46.512557   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:46.512641   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:46.512641   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:46.512641   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:46.512641   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:46.512641   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:46.512716   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:46 GMT
	I0203 12:27:46.512716   13136 round_trippers.go:580]     Audit-Id: bab84ac9-cf15-40c0-b73e-223248cc06fd
	I0203 12:27:46.512871   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:46.513452   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:47.008692   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:47.008692   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:47.008692   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:47.008692   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:47.013229   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:47.013342   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:47.013342   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:47.013342   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:47 GMT
	I0203 12:27:47.013342   13136 round_trippers.go:580]     Audit-Id: 3e4372ec-e4f9-48bb-ac64-169b7246c511
	I0203 12:27:47.013445   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:47.013668   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:47.508406   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:47.508406   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:47.508406   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:47.508406   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:47.513195   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:47.513195   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Audit-Id: 5f64dfb6-9f52-4dde-9cbb-e06ed63d72e6
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:47.513195   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:47.513195   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:47.513195   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:47 GMT
	I0203 12:27:47.513450   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.008483   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:48.008483   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:48.008483   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:48.008483   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:48.013170   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:48.013170   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:48.013170   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:48.013170   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:48 GMT
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Audit-Id: 20eb517a-e5ae-43d0-be0c-baf2625f7c39
	I0203 12:27:48.013170   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:48.013546   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.509096   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:48.509096   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:48.509096   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:48.509096   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:48.513384   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:48.513469   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:48.513469   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:48.513469   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:48 GMT
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Audit-Id: d0dce5d1-6ca2-466e-8f74-aa259aa5a2b7
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:48.513469   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:48.513642   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:48.513841   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:49.008688   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:49.008688   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:49.008688   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:49.008688   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:49.012736   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:49.012736   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Audit-Id: 63d253c5-bdc5-49bb-943b-38f0802a49b2
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:49.012736   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:49.012736   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:49.012736   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:49.012876   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:49 GMT
	I0203 12:27:49.013023   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:49.508487   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:49.508487   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:49.508487   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:49.508487   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:49.513724   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:49.513724   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:49.513724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:49 GMT
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Audit-Id: 1e581171-7253-4d90-a7e3-0156223c62a3
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:49.513811   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:49.513811   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:49.513991   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:50.008296   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:50.008296   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:50.008296   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:50.008296   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:50.012266   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:50.012352   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:50.012352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:50 GMT
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Audit-Id: 691892a2-abfe-4446-868e-6e24d46d15e3
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:50.012352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:50.012352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:50.012604   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:50.508688   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:50.508688   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:50.508688   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:50.508688   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:50.512566   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:50.512566   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:50 GMT
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Audit-Id: 6bc1d9f4-3369-4342-baf3-cced55a145b5
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:50.512566   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:50.512566   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:50.512566   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:50.512566   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:51.008955   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:51.008955   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:51.008955   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:51.009113   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:51.012878   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:51.012878   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:51.012878   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:51 GMT
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Audit-Id: 62902a23-896e-4efe-9940-464512caab66
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:51.012878   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:51.012878   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:51.013200   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:51.013795   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:51.509425   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:51.509425   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:51.509425   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:51.509425   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:51.514147   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:51.514147   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Audit-Id: 39e97727-7c5f-4679-b5fd-5fd96dbc75cc
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:51.514147   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:51.514147   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:51.514147   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:51 GMT
	I0203 12:27:51.514147   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:52.009219   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:52.009219   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:52.009219   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:52.009219   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:52.013565   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:52.013565   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:52.013565   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:52 GMT
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Audit-Id: 05f7f6d1-4ebf-4be1-8976-7bfdf9bbab45
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:52.013565   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:52.013565   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:52.013882   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:52.508024   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:52.508024   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:52.508024   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:52.508024   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:52.512934   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:52.512934   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Audit-Id: 64dd654a-9f9e-4a9e-ace1-82437fa2cbcb
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:52.512934   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:52.512934   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:52.512934   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:52 GMT
	I0203 12:27:52.513058   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:53.008211   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:53.008211   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:53.008211   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:53.008211   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:53.013220   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:53.013302   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:53.013302   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:53.013302   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:53.013302   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:53 GMT
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Audit-Id: b5738682-fb39-4d7f-9a31-2085e1c652d9
	I0203 12:27:53.013370   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:53.014259   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:53.015161   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:53.508050   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:53.508050   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:53.508050   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:53.508050   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:53.512709   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:53.512709   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Audit-Id: 938d616a-fe61-43ec-8408-83f01412535c
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:53.512709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:53.512709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:53.512709   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:53 GMT
	I0203 12:27:53.512709   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:54.008647   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:54.008647   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:54.008647   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:54.008647   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:54.015318   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:27:54.015318   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Audit-Id: 82442fc1-750a-4a4a-b139-1ce5b2a7ae3f
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:54.015318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:54.015318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:54.015318   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:54 GMT
	I0203 12:27:54.016289   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:54.508517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:54.508517   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:54.508517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:54.508517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:54.513698   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:54.513698   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:54.513698   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:54.513698   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:54 GMT
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Audit-Id: fdbd15d4-ea3c-4b5c-99b9-807ccaa99c59
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:54.513698   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:54.513996   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.009506   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:55.009506   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:55.009506   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:55.009506   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:55.012848   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:55.013541   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:55.013541   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:55 GMT
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Audit-Id: 93ecdf96-b217-4d83-b1f3-301eee2a5b80
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:55.013541   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:55.013541   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:55.013965   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.509270   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:55.509270   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:55.509270   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:55.509270   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:55.513470   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:55.513470   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:55.513470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:55.513470   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:55 GMT
	I0203 12:27:55.513470   13136 round_trippers.go:580]     Audit-Id: bdfa3e70-0afa-4651-bccc-61ba71596f53
	I0203 12:27:55.513707   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:55.514158   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:56.008464   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:56.008464   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:56.008464   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:56.008464   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:56.012050   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:56.012836   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:56.012836   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:56.012836   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:56 GMT
	I0203 12:27:56.012836   13136 round_trippers.go:580]     Audit-Id: e78281b6-b1b6-4e7b-a09c-8e475f4467f8
	I0203 12:27:56.013172   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:56.508769   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:56.509218   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:56.509218   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:56.509218   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:56.513314   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:56.513314   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:56.513314   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:56.513314   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:56 GMT
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Audit-Id: 2374f17c-fc65-4046-b6a4-67f5de6848cd
	I0203 12:27:56.513314   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:56.513551   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.008174   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:57.008704   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:57.008704   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:57.008798   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:57.013183   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:57.013183   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:57.013183   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:57.013183   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:57 GMT
	I0203 12:27:57.013183   13136 round_trippers.go:580]     Audit-Id: 16b26188-4b38-443a-bc9c-65cae13df402
	I0203 12:27:57.013183   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.508490   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:57.508490   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:57.508490   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:57.508490   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:57.513854   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:27:57.513941   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:57.513941   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:57 GMT
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Audit-Id: 2c833c05-7ff3-4928-82a8-ce94cd51da6d
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:57.513941   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:57.513941   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:57.514188   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:57.514651   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:27:58.008783   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:58.009336   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:58.009412   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:58.009412   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:58.013395   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:58.013395   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Audit-Id: fd0d4521-dcd0-4e83-aef7-7320e7ae1452
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:58.013395   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:58.013395   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:58.013395   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:58 GMT
	I0203 12:27:58.013944   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:58.508368   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:58.508368   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:58.508368   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:58.508368   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:58.512765   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:27:58.512765   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Audit-Id: df361e64-1cf8-4191-ac50-2e1e8fd87c7a
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:58.512868   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:58.512868   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:58.512868   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:58 GMT
	I0203 12:27:58.513446   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:59.009707   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:59.009707   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:59.009707   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:59.009707   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:59.013350   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:59.013677   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Audit-Id: 48798ddc-6609-4231-b642-709b6dad2dd0
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:59.013677   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:59.013677   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:59.013677   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:59 GMT
	I0203 12:27:59.014107   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:27:59.508318   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:27:59.508318   13136 round_trippers.go:469] Request Headers:
	I0203 12:27:59.508318   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:27:59.508318   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:27:59.512290   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:27:59.513068   13136 round_trippers.go:577] Response Headers:
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:27:59.513068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:27:59.513068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:27:59 GMT
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Audit-Id: 0ca1f275-dd98-491b-a84f-7572cab5c452
	I0203 12:27:59.513068   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:27:59.513553   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:00.008521   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:00.008521   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:00.008521   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:00.008521   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:00.012064   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:00.012130   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Audit-Id: 3d319fe3-e65a-4fbe-8c38-3008d12152e7
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:00.012130   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:00.012130   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:00.012130   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:00 GMT
	I0203 12:28:00.012367   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:00.012549   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:00.509392   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:00.509392   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:00.509392   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:00.509392   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:00.512755   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:00.512856   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:00 GMT
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Audit-Id: 3ea11098-c1b8-4eab-b9a4-3d0d4dbb90aa
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:00.512856   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:00.512856   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:00.512856   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:00.512950   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:01.008682   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:01.008682   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:01.008682   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:01.008682   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:01.016352   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:01.016352   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:01 GMT
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Audit-Id: 8a84e20e-dc82-47d5-9d5e-0905557c1514
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:01.016352   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:01.016352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:01.016352   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:01.016352   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:01.508721   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:01.509252   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:01.509252   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:01.509252   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:01.516730   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:01.516784   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:01 GMT
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Audit-Id: 3ea996e0-e3eb-4bc1-a92d-3bf54b479449
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:01.516784   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:01.516784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:01.516784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:01.516977   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:02.009018   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:02.009018   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:02.009018   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:02.009098   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:02.013337   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:02.013337   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:02.013337   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:02 GMT
	I0203 12:28:02.013337   13136 round_trippers.go:580]     Audit-Id: 9e5e229e-d9db-4a1c-955d-f3cd5b0af3a3
	I0203 12:28:02.013460   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:02.013460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:02.013460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:02.013460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:02.013816   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:02.014475   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:02.509448   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:02.509641   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:02.509641   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:02.509641   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:02.513259   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:02.513479   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Audit-Id: 5bc50d86-1c36-4cc7-8ef2-08c22c2908c8
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:02.513479   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:02.513479   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:02.513479   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:02.513552   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:02 GMT
	I0203 12:28:02.514260   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:03.008816   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:03.008816   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:03.008816   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:03.008816   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:03.012748   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:03.012748   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:03.012748   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:03.012748   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:03 GMT
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Audit-Id: dbb06c18-bb02-46df-b609-ce147006b383
	I0203 12:28:03.012748   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:03.012748   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:03.508798   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:03.508798   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:03.508798   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:03.508798   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:03.513589   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:03.513589   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:03.513708   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:03 GMT
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Audit-Id: 3b3a7ba5-ef96-42d0-85a6-7962c4ee09be
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:03.513708   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:03.513708   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:03.514155   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:04.008842   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:04.008842   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:04.008842   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:04.008842   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:04.013812   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:04.013933   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:04.013933   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:04.013933   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:04 GMT
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Audit-Id: a4d8bead-83c5-44a7-8177-28f09a06eef6
	I0203 12:28:04.013933   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:04.014132   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:04.014669   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:04.509026   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:04.509026   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:04.509026   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:04.509026   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:04.513040   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:04.513040   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Audit-Id: d38da522-9d3a-4b0a-a485-9bf6c1caa63e
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:04.513137   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:04.513137   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:04.513137   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:04 GMT
	I0203 12:28:04.513439   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:05.008147   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:05.008147   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:05.008147   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:05.008147   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:05.012272   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:05.012272   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Audit-Id: eb14a766-b801-4164-87c1-418fd6ff7dc1
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:05.012272   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:05.012272   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:05.012272   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:05 GMT
	I0203 12:28:05.012272   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:05.508453   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:05.508453   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:05.508453   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:05.508453   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:05.514452   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:05.514452   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:05 GMT
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Audit-Id: a0e10902-ac7c-459a-b40a-d00f51a0aed4
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:05.514452   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:05.514452   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:05.514452   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:05.515054   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.008978   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:06.008978   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:06.008978   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:06.008978   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:06.012693   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:06.012765   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:06.012765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:06.012765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:06 GMT
	I0203 12:28:06.012765   13136 round_trippers.go:580]     Audit-Id: f934ecd8-a45b-47d8-8f4b-dc2f2ee95d99
	I0203 12:28:06.012993   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.508408   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:06.508408   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:06.508408   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:06.508408   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:06.513025   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:06.513103   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:06 GMT
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Audit-Id: 45d0fb9a-d203-4768-854e-347f31d5e48c
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:06.513103   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:06.513103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:06.513103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:06.513232   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:06.513770   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:07.008916   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:07.008916   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:07.008916   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:07.008916   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:07.012625   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:07.012625   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:07.012625   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:07 GMT
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Audit-Id: 8a9764e0-0561-4936-9a0d-f576b572237b
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:07.012625   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:07.012625   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:07.012625   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:07.509113   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:07.509113   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:07.509113   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:07.509113   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:07.513408   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:07.513408   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:07.513408   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:07.513408   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:07 GMT
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Audit-Id: e9888c70-1ffa-4ef8-8bd6-69434f50eb3e
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:07.513408   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:07.513780   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.009046   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:08.009046   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:08.009046   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:08.009046   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:08.016662   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:08.016725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:08.016725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:08 GMT
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Audit-Id: b006ece1-a3fc-47c4-b0b5-80714b815fdc
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:08.016725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:08.016784   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:08.016952   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.509256   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:08.509256   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:08.509256   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:08.509256   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:08.514291   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:08.514291   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:08.514291   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:08 GMT
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Audit-Id: a0db5066-8e01-4126-8b70-7049932981b3
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:08.514291   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:08.514291   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:08.514291   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:08.515156   13136 node_ready.go:53] node "multinode-749300" has status "Ready":"False"
	I0203 12:28:09.009559   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:09.009559   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:09.009625   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:09.009625   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:09.013726   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:09.013726   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:09.013726   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:09.013726   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:09 GMT
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Audit-Id: 4105e73f-ae7c-4edb-bd90-50f9b5b24467
	I0203 12:28:09.013726   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:09.014033   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:09.508332   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:09.508332   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:09.508332   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:09.508332   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:09.512925   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:09.512925   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Audit-Id: f8f2ffd7-2556-485c-bd38-6664c2e84e5c
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:09.512925   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:09.512925   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:09.512925   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:09 GMT
	I0203 12:28:09.512925   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1872","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5580 chars]
	I0203 12:28:10.008452   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.008452   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.008452   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.008452   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.016901   13136 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0203 12:28:10.016901   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.016901   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.016901   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Audit-Id: f0fb5700-49f5-4aa2-bd4f-461847d58a5a
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.016983   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.017151   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1914","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5453 chars]
	I0203 12:28:10.017151   13136 node_ready.go:49] node "multinode-749300" has status "Ready":"True"
	I0203 12:28:10.017151   13136 node_ready.go:38] duration metric: took 39.5089814s for node "multinode-749300" to be "Ready" ...
	I0203 12:28:10.017151   13136 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:28:10.017151   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:10.017151   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.017151   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.017151   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.033173   13136 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0203 12:28:10.033771   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Audit-Id: 578a2d3a-8189-4eeb-b517-94366c7e6b76
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.033771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.033771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.033771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.036124   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1914"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90298 chars]
	I0203 12:28:10.039985   13136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:10.040142   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:10.040142   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.040142   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.040142   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.051703   13136 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0203 12:28:10.051703   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Audit-Id: 18fc8a36-48a8-4a16-9d76-5bc300577d64
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.051703   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.051703   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.051703   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.051703   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:10.052411   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.052411   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.052411   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.052411   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.057066   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:10.057066   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Audit-Id: 7b49eeac-6ef1-4d6f-9637-6a15d104e5d2
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.057066   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.057066   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.057066   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.057066   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:10.541122   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:10.541122   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.541122   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.541122   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.545725   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:10.545725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Audit-Id: eef1aa6c-b8e6-4ece-a267-abc65db4c707
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.545725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.545725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.545860   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.545860   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.545993   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:10.546835   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:10.546835   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:10.546894   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:10.546894   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:10.550028   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:10.550100   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Audit-Id: abfcf48b-95ae-4b09-bded-b9ff118139c1
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:10.550100   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:10.550100   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:10.550100   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:10 GMT
	I0203 12:28:10.550326   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:11.040666   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:11.040666   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.040666   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.040666   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.046640   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.046640   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.046640   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.046640   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Audit-Id: 5977d234-6a07-4214-b5b4-a72ff0160ab4
	I0203 12:28:11.046640   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.046640   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:11.046640   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:11.046640   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.046640   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.046640   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.050664   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.051265   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.051265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Audit-Id: 73dce08f-6c23-4f2b-98ef-8f9f3a58b585
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.051339   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.051339   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.051597   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:11.540218   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:11.540218   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.540218   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.540218   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.545099   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:11.545099   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Audit-Id: 2591d6f2-092e-4c5d-921a-47e71177e964
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.545099   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.545099   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.545099   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.545301   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:11.546115   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:11.546115   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:11.546208   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:11.546208   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:11.551295   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:11.551295   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Audit-Id: 9d922085-bb5d-4e88-93a3-0bd4313b3b7a
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:11.551295   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:11.551295   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:11.551295   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:11 GMT
	I0203 12:28:11.551999   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:12.040749   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:12.040749   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.040749   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.040749   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.045362   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:12.045460   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.045460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.045460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.045460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.045460   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.045544   13136 round_trippers.go:580]     Audit-Id: 38b72ff7-0e27-4254-b600-43fa6e99f48e
	I0203 12:28:12.045544   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.045637   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:12.046333   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:12.046333   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.046408   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.046408   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.049624   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:12.049624   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.049624   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.049624   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.049624   13136 round_trippers.go:580]     Audit-Id: fcf99cde-84e0-4a84-90f7-7eea4754a2f4
	I0203 12:28:12.050750   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:12.050993   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
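
	[Editorial note] The entries above are one iteration of minikube's readiness wait: roughly every 500 ms it GETs the coredns Pod and the control-plane Node, then logs `pod_ready.go:103 ... "Ready":"False"` until the Pod's Ready condition turns True. As a hedged illustration only — this is not minikube's actual implementation — a minimal client-go sketch of the same polling pattern follows. The kubeconfig path, namespace, pod name, 500 ms interval, and 5-minute timeout are assumptions taken from this log for the sake of the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the Pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative values taken from the log above; adjust for your cluster.
		kubeconfig := `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`
		namespace := "kube-system"
		podName := "coredns-668d6bf9bc-v2gkp"

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll roughly every 500 ms (the cadence visible in the log) until the
		// Pod reports Ready or the timeout expires. A real tool would likely
		// tolerate transient Get errors instead of aborting on the first one.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 5*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				ready := isPodReady(pod)
				fmt.Printf("pod %q Ready=%v\n", podName, ready)
				return ready, nil
			})
		if err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}
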
	I0203 12:28:12.541137   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:12.541137   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.541137   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.541137   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.545199   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:12.545199   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Audit-Id: 5589ede2-d2a7-4fe2-9280-63e2910827de
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.545315   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.545315   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.545315   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.545686   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:12.546465   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:12.546465   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:12.546465   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:12.546465   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:12.549370   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:12.549446   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:12.549446   13136 round_trippers.go:580]     Audit-Id: a268ddb5-0b4b-439d-a1e2-1f0479dce27f
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:12.549512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:12.549512   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:12.549512   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:12 GMT
	I0203 12:28:12.549609   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:13.041165   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:13.041165   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.041165   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.041165   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.045069   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:13.045069   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.045069   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.045069   13136 round_trippers.go:580]     Audit-Id: aed1876c-6afc-4567-9c94-9fb50cdb0899
	I0203 12:28:13.045169   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.045169   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.045169   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.045169   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.045537   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:13.046332   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:13.046410   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.046410   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.046410   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.052370   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:13.052370   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Audit-Id: 5e772ad2-b269-43aa-b341-2df237d7687a
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.052370   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.052370   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.052370   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.052370   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1915","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I0203 12:28:13.541681   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:13.541681   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.541681   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.541681   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.546472   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:13.546472   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.546472   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.546472   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Audit-Id: b8ce2091-75b5-437c-acaf-7c5a90a4052c
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.546472   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.546472   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:13.547144   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:13.547144   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:13.547750   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:13.547855   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:13.551646   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:13.551734   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Audit-Id: bcebd6b1-c269-4209-8354-155c6236e811
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:13.551734   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:13.551734   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:13.551734   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:13 GMT
	I0203 12:28:13.552016   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:14.041243   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:14.041243   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.041243   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.041243   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.045676   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:14.045676   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.045676   13136 round_trippers.go:580]     Audit-Id: 5e278304-f037-458c-a7c2-34385dd97a3a
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.045771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.045771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.045771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.045853   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:14.046628   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:14.046628   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.046628   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.046628   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.052731   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:14.052731   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.052731   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.052731   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Audit-Id: a0527d73-57c3-40f0-bc56-60c7515c736f
	I0203 12:28:14.052731   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.052731   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:14.053511   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:14.541564   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:14.541650   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.541650   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.541650   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.545866   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:14.545866   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.545866   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.545866   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.545866   13136 round_trippers.go:580]     Audit-Id: f67eccfb-e648-4ab3-bed7-428a6eb02617
	I0203 12:28:14.545866   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:14.546902   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:14.546967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:14.546967   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:14.546967   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:14.549662   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:14.549662   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:14.549662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:14 GMT
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Audit-Id: 479c102c-a461-45ac-a960-ca3a65f55337
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:14.549662   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:14.549662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:14.549937   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:15.040321   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:15.040321   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.040321   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.040321   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.044873   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:15.044970   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.044970   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.044970   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.044970   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.044970   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.045050   13136 round_trippers.go:580]     Audit-Id: 278cd36f-737d-439c-9196-c4bc9859a2d4
	I0203 12:28:15.045076   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.045285   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:15.046138   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:15.046138   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.046138   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.046201   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.048911   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:15.048911   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.048911   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.048911   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.048911   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Audit-Id: 875d38ec-68c3-429d-9b30-69ecc9185cfe
	I0203 12:28:15.049430   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.049605   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:15.540721   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:15.540721   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.540721   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.540721   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.544069   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:15.544153   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Audit-Id: bcd97cc3-9ad0-47f9-89bc-14f64eb4c1c0
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.544153   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.544153   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.544153   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.544392   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:15.544724   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:15.544724   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:15.544724   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:15.544724   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:15.548033   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:15.548033   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:15.548243   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:15.548243   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:15 GMT
	I0203 12:28:15.548243   13136 round_trippers.go:580]     Audit-Id: 5c1228d9-749f-442f-9654-6ce0a5fec451
	I0203 12:28:15.548530   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.042899   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:16.042973   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.042973   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.042973   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.046773   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.046835   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.046835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.046835   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Audit-Id: b1a26ab1-4137-438a-b921-1e93efd74aaa
	I0203 12:28:16.046835   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.046998   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:16.047833   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:16.047833   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.047909   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.047909   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.051891   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.051978   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.051978   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.051978   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Audit-Id: 391bac31-3be8-43a6-ade4-19f865d07a19
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.051978   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.052185   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.540864   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:16.541032   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.541032   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.541032   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.545804   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:16.545880   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.545880   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.545880   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Audit-Id: fb52dfe4-6767-4322-b3b9-6f18d560609a
	I0203 12:28:16.545880   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.546041   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:16.546930   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:16.546930   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:16.546930   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:16.546930   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:16.550233   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:16.550233   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:16.550233   13136 round_trippers.go:580]     Audit-Id: 77268df5-8715-4d25-8c37-a10ab16cae48
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:16.550787   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:16.550787   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:16.550787   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:16 GMT
	I0203 12:28:16.550963   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:16.551362   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
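
	[Editorial note] For a one-off manual check of the same condition while reproducing this failure, `kubectl wait --namespace kube-system --for=condition=Ready pod/coredns-668d6bf9bc-v2gkp --timeout=5m` blocks until the Pod reports Ready (the 5m timeout here is an illustrative assumption, not a value taken from the test); it is effectively the same wait this log performs by repeatedly GETting the Pod.
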
	I0203 12:28:17.040611   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:17.040611   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.040611   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.040611   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.045018   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:17.045103   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.045103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.045103   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Audit-Id: e4e5bc4e-6b5d-4ee6-a052-845e400862bb
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.045103   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.045103   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:17.046263   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:17.046337   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.046337   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.046337   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.049411   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:17.049411   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.049411   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.049411   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Audit-Id: ecddf643-e2f3-45a0-a123-dffba90d6c81
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.049411   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.049411   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:17.541764   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:17.541764   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.541764   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.541764   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.546089   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:17.546089   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Audit-Id: a6922501-d2cd-4a6b-a190-4faa31cdc2b5
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.546089   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.546089   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.546089   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.546484   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:17.547122   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:17.547122   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:17.547122   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:17.547122   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:17.549400   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:17.550416   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:17.550416   13136 round_trippers.go:580]     Audit-Id: 2d030636-2e10-4ce2-8e40-299915aa0f09
	I0203 12:28:17.550416   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:17.550468   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:17.550468   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:17.550468   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:17.550468   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:17 GMT
	I0203 12:28:17.550673   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:18.040839   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:18.040839   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.040839   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.040839   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.045193   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:18.045193   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Audit-Id: d1d1abe1-c722-4074-8eb1-a8bb625b4322
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.045193   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.045193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.045193   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.045193   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:18.046482   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:18.046482   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.046482   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.046482   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.053246   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:18.053246   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.053246   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Audit-Id: f52501e7-69b7-4159-9cb3-d67f75fc8eaf
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.053246   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.053246   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.053246   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:18.540600   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:18.540600   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.540600   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.540600   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.544982   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:18.545096   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Audit-Id: 2c0687a9-4575-4e6f-b713-e100a94e6b86
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.545096   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.545096   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.545096   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.545341   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:18.546103   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:18.546103   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:18.546103   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:18.546103   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:18.549508   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:18.549508   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:18.549508   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:18.549508   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:18 GMT
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Audit-Id: 783707b3-d0da-4dfe-881a-66d0f6996fb8
	I0203 12:28:18.549508   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:18.549780   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:19.041163   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:19.041163   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.041163   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.041163   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.053526   13136 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0203 12:28:19.054607   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Audit-Id: 4aac503e-c093-44a6-94b5-807d166d2911
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.054607   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.054649   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.054649   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.054649   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.054864   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:19.055645   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:19.055645   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.055722   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.055722   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.060355   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:19.060455   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Audit-Id: 2078f208-c2d9-4a6b-85e1-b14ef2e700de
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.060455   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.060455   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.060455   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.062159   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:19.062159   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:19.540499   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:19.540499   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.540499   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.540499   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.544496   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:19.544569   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Audit-Id: 0ed79164-db5f-4c00-bcc2-16a65c377f23
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.544569   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.544569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.544569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.544569   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:19.545585   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:19.545585   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:19.545659   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:19.545659   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:19.548854   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:19.548854   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Audit-Id: 764f2081-fbdd-454f-a9ca-1639696960ce
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:19.549236   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:19.549236   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:19.549236   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:19 GMT
	I0203 12:28:19.549492   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:20.040190   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:20.040190   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.040190   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.040190   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.045282   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:20.045282   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Audit-Id: e77de3d8-c28c-468c-b1de-5ee1c12431b7
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.045282   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.045282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.045282   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.045510   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:20.046261   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:20.046261   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.046261   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.046261   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.052729   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:20.052729   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Audit-Id: b75bea0d-0f48-4f32-a8f2-84115b0930f2
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.052729   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.052729   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.052729   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.052729   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:20.540870   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:20.540956   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.540956   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.540956   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.544859   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:20.544959   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.544959   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.544959   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.544959   13136 round_trippers.go:580]     Audit-Id: 26b4faed-1693-4877-85f3-c2a5660cfbbb
	I0203 12:28:20.545024   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.545024   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.545024   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.545212   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:20.545906   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:20.545968   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:20.545968   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:20.545968   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:20.548725   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:20.548725   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:20.548725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:20.548725   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:20 GMT
	I0203 12:28:20.548725   13136 round_trippers.go:580]     Audit-Id: 7ba94174-41f2-4298-bdfb-58c3689dd7cf
	I0203 12:28:20.548725   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.041105   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:21.041105   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.041105   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.041105   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.044704   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.045362   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.045362   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.045362   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.045362   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.045362   13136 round_trippers.go:580]     Audit-Id: 5f75181c-1b8a-422a-b477-0acc7d466358
	I0203 12:28:21.045472   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.045472   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.045754   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:21.046551   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:21.046551   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.046629   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.046629   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.049992   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.050054   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.050054   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.050054   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.050118   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.050118   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.050118   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.050118   13136 round_trippers.go:580]     Audit-Id: 5afbc021-fb0a-4ba8-afc5-03929647065c
	I0203 12:28:21.050389   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.541160   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:21.541160   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.541160   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.541160   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.545378   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:21.545378   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.545378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.545378   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Audit-Id: c997fa66-d716-46e4-84ac-0eef6bf2319b
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.545378   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.545378   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:21.546487   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:21.546560   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:21.546560   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:21.546560   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:21.549971   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:21.549971   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:21.549971   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:21.549971   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:21 GMT
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Audit-Id: 49184a45-d677-4ef3-9aee-e5173a0cf69b
	I0203 12:28:21.549971   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:21.549971   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:21.550624   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:22.040778   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:22.040778   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.040778   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.040778   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.045427   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:22.045427   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Audit-Id: 0b46e9db-563f-4db0-9a8a-f445b0a97553
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.045427   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.045427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.045427   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.045754   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:22.046470   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:22.046539   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.046539   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.046539   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.052327   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:22.052327   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Audit-Id: 29856910-9e37-4971-8d4a-cb80fa46082c
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.052327   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.052327   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.052327   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.052327   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:22.541584   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:22.541683   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.541683   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.541683   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.546220   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:22.546220   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.546220   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.546220   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Audit-Id: 58f71b24-d31e-4ca8-86a9-77e4d771a8fb
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.546220   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.546220   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:22.547364   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:22.547364   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:22.547364   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:22.547443   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:22.550518   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:22.550518   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:22.550518   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:22.550518   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:22 GMT
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Audit-Id: 6cb59054-21ec-4e1a-a58c-0629e260d7da
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:22.550518   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:22.550760   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:23.040538   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:23.040538   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.040538   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.040538   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.045302   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:23.045302   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.045302   13136 round_trippers.go:580]     Audit-Id: b9532050-4db0-4aab-89d2-b13890a9ce6f
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.045397   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.045397   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.045397   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.045602   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:23.046356   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:23.046356   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.046427   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.046427   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.049751   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:23.049842   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.049842   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.049842   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.049842   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.049842   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.049908   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.049908   13136 round_trippers.go:580]     Audit-Id: fe40e04a-71a4-486d-a16f-82f6db65bd08
	I0203 12:28:23.050032   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:23.540432   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:23.540432   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.540432   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.540432   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.545089   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:23.545197   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Audit-Id: b1ca1781-a1cf-4772-b642-d07702dd8dac
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.545197   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.545197   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.545197   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.545482   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:23.546263   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:23.546337   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:23.546337   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:23.546337   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:23.549265   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:23.549265   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Audit-Id: 8435724e-68b5-4416-a75e-093953587d5a
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:23.549265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:23.549265   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:23.549265   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:23 GMT
	I0203 12:28:23.549465   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:24.041290   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:24.041290   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.041362   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.041362   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.045846   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:24.045846   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Audit-Id: 24c17998-e5de-4588-bb6e-9cc7203809b4
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.045846   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.045846   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.045846   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.045846   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:24.046967   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:24.046967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.047034   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.047034   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.050068   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:24.050068   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Audit-Id: 53447755-5f83-455d-8a8a-f1a96b79bd22
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.050068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.050068   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.050068   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.050354   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:24.050498   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:24.541547   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:24.541547   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.541769   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.541769   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.548539   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:24.548539   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Audit-Id: 71e0bc1c-3925-4f8a-a728-c2b0b162b1b6
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.548539   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.548539   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.548539   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.548539   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:24.550624   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:24.550624   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:24.550624   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:24.550687   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:24.553621   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:24.553704   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:24.553704   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:24.553704   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:24.553704   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:24.553704   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:24.553777   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:24 GMT
	I0203 12:28:24.553777   13136 round_trippers.go:580]     Audit-Id: e540aec6-22e9-4629-a09f-e6360e52b561
	I0203 12:28:24.553902   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:25.040909   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:25.040909   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.040909   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.040909   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.044580   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.045263   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.045263   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.045263   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Audit-Id: d03f7ca4-7aef-4d2b-bc93-7493424fcf07
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.045263   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.045424   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:25.046162   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:25.046267   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.046267   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.046267   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.049318   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.049318   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Audit-Id: c52fde3a-2f7d-44c3-8f1d-27257dfd3e25
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.049318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.049318   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.049318   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.049593   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:25.540423   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:25.540423   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.540423   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.540423   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.547830   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:25.547830   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.547830   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.547830   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Audit-Id: 38be7af3-52d8-4586-bc8b-0c1899124850
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.547830   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.547830   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:25.548800   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:25.548800   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:25.548800   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:25.548800   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:25.551806   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:25.551806   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:25 GMT
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Audit-Id: 8af2c4dd-a45c-4857-9907-5fa412fb17be
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:25.551806   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:25.551806   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:25.551806   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:25.551806   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:26.041850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:26.042217   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.042217   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.042217   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.046311   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:26.046383   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.046383   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.046383   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.046446   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.046446   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.046446   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.046446   13136 round_trippers.go:580]     Audit-Id: 9bddf111-6cd8-495e-a423-fe83de63ff2f
	I0203 12:28:26.046667   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:26.047647   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:26.047647   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.047647   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.047647   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.054419   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:26.054419   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.054419   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.054419   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Audit-Id: 00380662-796a-4b6a-b2da-be31e8897d04
	I0203 12:28:26.054419   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.054733   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.055236   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:26.055236   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:26.542065   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:26.542065   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.542155   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.542155   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.545944   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:26.545944   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Audit-Id: 68c01edc-96e4-489d-8ca8-d80c71cc8695
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.546013   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.546013   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.546013   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.546013   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:26.546610   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:26.546610   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:26.547132   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:26.547132   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:26.549850   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:26.550237   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Audit-Id: f46a0bc6-ea68-4924-ae7d-a4660c473bbb
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:26.550237   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:26.550237   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:26.550237   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:26 GMT
	I0203 12:28:26.550532   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:27.041031   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:27.041031   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.041031   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.041031   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.046151   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:27.046151   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.046151   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.046151   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Audit-Id: 80e1590c-2df2-465c-8b41-ed40274c71bb
	I0203 12:28:27.046151   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.046151   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:27.047674   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:27.047674   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.047674   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.047674   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.050901   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:27.050993   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Audit-Id: a169fb66-60a6-481b-a733-503aef41116c
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.050993   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.050993   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.050993   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.050993   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:27.540672   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:27.540672   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.540672   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.540672   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.544670   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:27.544670   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Audit-Id: 43823f7a-d3b8-4bf2-8cd0-e504795bc4fc
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.544670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.544670   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.544670   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.544900   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:27.545738   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:27.545738   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:27.545738   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:27.545738   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:27.551454   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:27.551454   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:27.551454   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:27.551589   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:27.551589   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:27 GMT
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Audit-Id: 0c7ef70d-e7cc-4897-9105-f27a3c8a8989
	I0203 12:28:27.551589   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:27.551751   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:28.041494   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:28.041494   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.041494   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.041494   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.046423   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:28.046562   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Audit-Id: 4f68b248-d82c-4903-b801-08fe7d71b01c
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.046562   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.046562   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.046562   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.046896   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:28.047558   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:28.047622   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.047622   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.047622   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.053727   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:28.053727   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Audit-Id: 6d9412c0-cc6b-4a99-a079-f67562e8ece2
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.053727   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.053727   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.053727   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.053727   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:28.055343   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:28.541204   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:28.541204   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.541204   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.541204   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.547050   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:28.547050   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Audit-Id: 0e7477c7-253d-429e-8480-b7d36eade537
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.547050   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.547150   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.547150   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.547150   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.547353   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:28.548212   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:28.548212   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:28.548212   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:28.548212   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:28.554394   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:28.554394   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:28.554460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:28 GMT
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Audit-Id: 07157447-09e8-4f06-bb37-d45f3f32fd1f
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:28.554460   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:28.554460   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:28.554644   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:29.040539   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:29.040539   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.040539   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.040539   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.044831   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.044831   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Audit-Id: bc4f0848-3d32-4b88-8ad4-f5c48561a259
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.044831   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.044831   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.044831   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.044831   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:29.045529   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:29.045529   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.045529   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.045529   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.052517   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:28:29.052609   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Audit-Id: 4e21d5f9-f741-4802-ae6e-f3674148bfb6
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.052632   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.052632   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.052632   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.052632   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:29.540880   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:29.540880   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.540880   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.540880   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.545684   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.545684   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Audit-Id: 8e746664-3bcd-43fc-b242-e4f1eab10540
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.545824   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.545824   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.545824   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.546026   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:29.546759   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:29.546759   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:29.546759   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:29.546759   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:29.551570   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:29.551570   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:29.551570   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:29.551570   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:29 GMT
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Audit-Id: 5846d48e-e7fb-4e41-a9ee-497091196550
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:29.551570   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:29.552543   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.040945   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:30.040945   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.040945   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.040945   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.045596   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:30.045724   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.045724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.045724   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Audit-Id: 7ed00026-8a97-4216-b1e6-13905f28a2eb
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.045724   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.045869   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:30.046731   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:30.046792   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.046792   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.046792   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.051711   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:30.051711   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Audit-Id: 7a2db867-af9b-4c02-9722-a611eb83285f
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.051711   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.051711   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.051711   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.052025   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.541207   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:30.541207   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.541207   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.541207   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.545135   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:30.545135   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.545135   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.545249   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.545249   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.545249   13136 round_trippers.go:580]     Audit-Id: 14c7e090-7216-44be-8e6c-da4f9cefa4ae
	I0203 12:28:30.545411   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:30.546100   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:30.546100   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:30.546100   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:30.546100   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:30.561611   13136 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0203 12:28:30.561611   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:30.561611   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:30.561682   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:30 GMT
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Audit-Id: df93a41a-4cbd-4cbf-aaf1-3ab1082d98c0
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:30.561682   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:30.561924   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:30.562353   13136 pod_ready.go:103] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"False"
	I0203 12:28:31.041246   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:31.041246   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.041246   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.041246   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.046328   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.046451   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.046451   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.046451   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.046451   13136 round_trippers.go:580]     Audit-Id: 6e469852-daa0-44d0-8fa7-52eeaf583d0c
	I0203 12:28:31.046535   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.046695   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1805","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0203 12:28:31.047444   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.047444   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.047444   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.047444   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.053049   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.053049   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Audit-Id: ed3c9a43-46af-45ec-bf35-e77cc27ad430
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.053049   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.053049   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.053049   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.053049   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.541658   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:28:31.541658   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.541658   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.541658   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.561658   13136 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0203 12:28:31.561765   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.561765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.561765   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Audit-Id: 05e8bc17-d4fe-4490-b7d8-aed474b4d067
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.561765   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.561765   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0203 12:28:31.562782   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.562782   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.562782   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.562782   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.571916   13136 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 12:28:31.571999   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Audit-Id: 283c7627-962f-4571-9be4-84291dc99169
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.571999   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.571999   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.571999   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.572189   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.572591   13136 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.572661   13136 pod_ready.go:82] duration metric: took 21.5324352s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.572661   13136 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.572661   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:28:31.572806   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.572806   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.572854   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.598672   13136 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0203 12:28:31.598672   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.598672   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.598672   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.598779   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Audit-Id: 573796a6-41ab-40ae-a42f-ff02650f9572
	I0203 12:28:31.598779   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.601089   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1876","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0203 12:28:31.601089   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.601089   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.601089   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.601089   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.605966   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.605966   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Audit-Id: 67d7be9d-84ce-4bcf-8912-b746a247e527
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.605966   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.605966   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.605966   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.605966   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.606969   13136 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.606969   13136 pod_ready.go:82] duration metric: took 34.3069ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.606969   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.606969   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:28:31.606969   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.606969   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.606969   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.617178   13136 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0203 12:28:31.617178   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.617178   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Audit-Id: a27a8639-956c-4b3f-b490-54fcfff8f4fc
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.617178   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.617178   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.617436   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1856","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0203 12:28:31.617622   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.617622   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.617622   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.617622   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.622274   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.622274   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.622274   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.622274   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.622274   13136 round_trippers.go:580]     Audit-Id: a15c1bfc-823c-4b32-bbe7-30d292318a28
	I0203 12:28:31.622484   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.622934   13136 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.622934   13136 pod_ready.go:82] duration metric: took 15.9656ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.622986   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.623052   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:28:31.623110   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.623110   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.623110   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.625111   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:31.625111   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.625111   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Audit-Id: 6b4b99f0-968a-4a8a-b3bc-fda4c02702e5
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.625111   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.625111   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.625111   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1874","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0203 12:28:31.626252   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.626354   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.626354   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.626354   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.629417   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:31.629417   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.629417   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Audit-Id: a46ba0bc-ca35-4a8a-aa06-7b13154e94f1
	I0203 12:28:31.629417   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.629529   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.629529   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.629643   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.629990   13136 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.630057   13136 pod_ready.go:82] duration metric: took 7.0706ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.630057   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.630140   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:28:31.630140   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.630140   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.630204   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.635858   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:31.635858   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.635858   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.635858   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Audit-Id: 5cd54abc-0f91-4c41-a973-f79f65739895
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.635858   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.636393   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:28:31.637046   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:31.637117   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.637117   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.637117   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.638945   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:28:31.638945   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.638945   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.638945   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.638945   13136 round_trippers.go:580]     Audit-Id: fc4f78ce-c1ec-417f-9904-7c02501c5ed4
	I0203 12:28:31.639945   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:31.639945   13136 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:31.639945   13136 pod_ready.go:82] duration metric: took 9.8881ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.639945   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:31.742759   13136 request.go:632] Waited for 102.8128ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:28:31.743047   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:28:31.743047   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.743047   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.743047   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.746547   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:31.746662   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Audit-Id: cc4e6c0e-add0-42fa-aa85-f37f000c5894
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.746662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.746662   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.746662   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.747165   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"1930","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:28:31.942605   13136 request.go:632] Waited for 194.7608ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:28:31.942905   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:28:31.942905   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:31.942905   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:31.942905   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:31.947358   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:31.947487   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:31.947487   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:31.947487   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:31 GMT
	I0203 12:28:31.947487   13136 round_trippers.go:580]     Audit-Id: 784c911b-32aa-4cdd-8b7c-197fb7ddb09f
	I0203 12:28:31.947666   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64","resourceVersion":"1941","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_07_57_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4581 chars]
	I0203 12:28:31.947666   13136 pod_ready.go:98] node "multinode-749300-m02" hosting pod "kube-proxy-ggnq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m02" has status "Ready":"Unknown"
	I0203 12:28:31.947666   13136 pod_ready.go:82] duration metric: took 307.7175ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	E0203 12:28:31.947666   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m02" hosting pod "kube-proxy-ggnq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m02" has status "Ready":"Unknown"
	I0203 12:28:31.947666   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.141811   13136 request.go:632] Waited for 193.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:28:32.141811   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:28:32.141811   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.141811   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.141811   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.147080   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:32.147080   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.147080   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.147080   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.147080   13136 round_trippers.go:580]     Audit-Id: 0ae124bf-2979-42be-97ac-1e26c8b29976
	I0203 12:28:32.147080   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:28:32.341896   13136 request.go:632] Waited for 193.2518ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:28:32.342213   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:28:32.342213   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.342213   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.342213   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.346635   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:28:32.346702   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.346702   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.346702   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Audit-Id: 3eff7c93-2d6e-46bd-a958-4fd9539cec09
	I0203 12:28:32.346702   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.346982   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:28:32.347387   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:28:32.347449   13136 pod_ready.go:82] duration metric: took 399.7785ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:28:32.347449   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:28:32.347449   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.542125   13136 request.go:632] Waited for 194.5194ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:28:32.542125   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:28:32.542125   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.542125   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.542125   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.546693   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:32.546693   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.546693   13136 round_trippers.go:580]     Audit-Id: 2a24baac-a3ee-4b48-a042-ebe7fe6b8e7a
	I0203 12:28:32.546693   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.546782   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.546782   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.546782   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.546782   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.546943   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1864","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5563 chars]
	I0203 12:28:32.742517   13136 request.go:632] Waited for 195.1713ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:32.742517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:28:32.742517   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:32.742517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:32.742517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:32.747535   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:32.747535   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Audit-Id: c5843651-ee5e-49ca-b2eb-51c8601ada71
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:32.747535   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:32.747535   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:32.747535   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:32 GMT
	I0203 12:28:32.747535   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:28:32.748122   13136 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:28:32.748122   13136 pod_ready.go:82] duration metric: took 400.596ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:28:32.748122   13136 pod_ready.go:39] duration metric: took 22.7307157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:28:32.748122   13136 api_server.go:52] waiting for apiserver process to appear ...
	I0203 12:28:32.755751   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:32.785041   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:32.785041   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:32.792964   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:32.821550   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:32.821550   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:32.829459   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:32.853753   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:32.853753   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:32.853753   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:32.861445   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:32.884026   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:32.884838   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:32.884838   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:32.895690   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:32.921034   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:32.921034   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:32.921034   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:32.929105   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:32.957040   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:32.957099   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:32.957208   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:32.966192   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:32.998981   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:32.998981   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:32.998981   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
	I0203 12:28:33.000010   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:33.000055   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:33.028303   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.028429   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:33.028487   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.028542   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:33.028542   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:33.028595   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:33.028675   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.028709   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:33.028762   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.028823   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.028879   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.028879   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.031822   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:33.031871   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:33.074302   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:33.074354   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:33.074899   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.074899   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:33.074899   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:33.075004   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:33.075135   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:33.075256   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:33.075339   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:33.075422   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:33.075422   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:33.075502   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:33.075582   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:33.075662   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.075745   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:33.075824   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:33.075903   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.075981   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:33.076064   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:33.076143   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.076224   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:33.076302   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:33.076386   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:33.076465   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:33.076542   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:33.076626   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:33.076706   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:33.076706   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:33.076706   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:33.076789   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:33.076869   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:33.076869   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:33.076952   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:33.077141   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:33.077141   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:33.077189   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:33.077189   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:33.077239   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:33.077239   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.077276   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:33.077292   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:33.077292   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:33.077364   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.077448   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077528   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.077610   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:33.077689   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:33.077775   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:33.077847   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:33.077928   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:33.078011   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.078091   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:33.078173   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:33.078255   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:33.078337   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:33.078416   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:33.078495   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.078576   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:33.078658   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:33.078731   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:33.078810   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:33.078810   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:33.078969   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:33.078969   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:33.079052   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:33.079133   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.079165   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.079165   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:33.079196   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.079211   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:33.079237   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079259   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:33.079787   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.079867   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080393   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.080474   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.101987   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:33.101987   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:33.135709   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-c
luster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:33.136587   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:33.137123   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:33.137376   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:33.137896   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:33.138183   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	I0203 12:28:33.144988   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:33.144988   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:33.181652   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.181964   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:33.181964   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.182060   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:33.182060   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:33.182060   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:33.182060   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.182134   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:33.182174   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.182251   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:33.182251   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182251   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:33.182781   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.182822   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:33.182822   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183392   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183440   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183482   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183527   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183568   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183615   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183663   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:33.183749   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183796   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:33.183837   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.183837   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:33.184373   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:33.184425   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:33.184462   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184538   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.184597   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:33.184626   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:33.184626   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	I0203 12:28:33.197589   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:33.197589   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:33.226851   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.226920   13136 command_runner.go:130] !  >
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:33.226920   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.226920   13136 command_runner.go:130] !  >
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:33.226920   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:33.226920   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:33.230083   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:33.230600   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:33.257414   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:33.257414   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:33.257414   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.257414   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:33.257414   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:33.257414   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:33.257414   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:33.257951   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:33.257951   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:33.261308   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:33.261393   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:33.288244   13136 command_runner.go:130] > .:53
	I0203 12:28:33.288244   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:33.288244   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:33.288244   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:33.288244   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	I0203 12:28:33.290200   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:33.290200   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:33.321760   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:33.322297   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:33.322297   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:33.322297   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:33.322385   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322385   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:33.322463   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:33.322463   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322542   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:33.322542   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:33.322618   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:33.322618   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322697   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322775   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:33.322775   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:33.322858   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:33.322942   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.322942   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:33.323025   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:33.323025   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:33.323025   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:33.323102   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:33.323181   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:33.323257   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:33.323337   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:33.323413   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:33.323493   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:33.323569   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:33.323648   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:33.323724   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:33.323803   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:33.323879   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:33.323879   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:33.323960   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:33.332197   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:33.332197   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:33.362203   13136 command_runner.go:130] > .:53
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:33.362203   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:33.362203   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:33.362203   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:33.362425   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:33.362517   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:33.362608   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:33.362688   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:33.362688   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:33.362727   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:33.362802   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:33.362884   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:33.362974   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:33.365786   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:33.365786   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:33.398090   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398090   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398180   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398311   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398400   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398479   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.398555   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.398635   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:33.398712   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:33.398788   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:33.398864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:33.398941   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:33.398941   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399017   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399091   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399091   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399165   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399241   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399241   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399315   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:33.399390   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:33.399473   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:33.399548   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:33.399625   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.399625   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.399666   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:33.399666   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:33.399702   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:33.399759   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:33.400294   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:33.400294   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:33.400378   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:33.400460   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:33.400495   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:33.400564   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:33.400564   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:33.400640   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:33.400717   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:33.400792   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:33.400871   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:33.400959   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401074   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401187   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401270   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401347   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:33.401427   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:33.401506   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:33.401581   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:33.401655   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:33.401759   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401759   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401820   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401820   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401866   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401893   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.401990   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402108   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402202   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:33.402320   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:33.402445   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:33.402445   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:33.402522   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:33.402600   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:33.402675   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:33.402749   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:33.402824   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:33.402898   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:33.402974   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:33.402974   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:33.403006   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:33.403132   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403777   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403852   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.403882   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404407   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404482   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404516   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404516   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.404590   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405116   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405204   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405240   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:33.405271   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:33.405409   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405448   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405494   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405521   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405558   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:33.405558   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:33.405592   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.405642   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:33.433844   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:33.433844   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:33.506019   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:33.506019   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:33.506019   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:33.506019   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:33.506019   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:33.506019   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:33.506019   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:33.506019   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:33.506019   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:33.506019   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:33.506689   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:33.506711   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:33.506711   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:33.506711   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	I0203 12:28:33.509303   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:33.509303   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.544390   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:33.545304   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:33.545373   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:33.546131   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:33.546172   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:33.546263   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:33.546352   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546352   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546432   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546432   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:33.546511   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:33.546587   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:33.546665   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:33.546742   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:33.546820   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:33.546899   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:33.546899   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:33.546983   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:33.547061   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:33.547138   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:33.547216   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:33.547294   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:33.547372   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:33.547450   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:33.547528   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:33.547606   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.547684   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:33.547762   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:33.547839   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:33.547916   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:33.547993   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:33.548040   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:33.548077   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:33.548101   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.548129   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.548176   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548215   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548260   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.548299   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.548344   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548383   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548436   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.548483   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548518   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.548556   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548629   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.548663   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548663   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:33.548700   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549226   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549264   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549384   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549421   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:33.549946   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:33.549946   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.549983   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550544   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:33.550615   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:33.550696   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:33.550737   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:33.550776   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.550810   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.550883   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550883   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550957   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.550996   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551031   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551069   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551103   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551177   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551215   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551250   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551289   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551362   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551397   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551428   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551490   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551520   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551578   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551636   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.551656   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.552181   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.552217   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552262   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552292   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:33.552861   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:33.553857   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:33.553857   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:33.553897   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:33.553932   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:33.553962   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:33.554044   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:33.554044   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:33.554085   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:33.602268   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:33.602268   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:33.903854   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:33.903854   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:33.903854   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.903854   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.903854   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:33.903854   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:33.903854   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.903854   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.903854   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:25 +0000
	I0203 12:28:33.903854   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:33.903854   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:33.903854   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:33.903854   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:33.903854   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:33.903854   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:33.903854   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:33.903854   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.903854   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.903854   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.904844   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.904844   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.904844   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.904844   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.904844   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:33.904844   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:33.904844   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.904844   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:33.904844   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:33.904844   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:33.904844   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:33.904844   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:33.904844   13136 command_runner.go:130] > Events:
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:33.904844   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.904844   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Warning  Rebooted                 68s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Normal   RegisteredNode           65s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:33.905846   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:33.905846   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.905846   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.905846   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:33.905846   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:33.905846   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:33.905846   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.905846   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.905846   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:33.905846   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:33.905846   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.905846   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:33.905846   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.905846   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.905846   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.905846   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.905846   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.905846   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.905846   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:33.905846   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.905846   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.905846   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.906861   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:33.906861   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:33.906861   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.906861   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:33.906861   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:33.906861   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:33.906861   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:33.906861   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:33.906861   13136 command_runner.go:130] > Events:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  RegisteredNode           65s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:33.906861   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:33.906861   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:33.906861   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:33.906861   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:33.906861   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:33.906861   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:33.906861   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:33.906861   13136 command_runner.go:130] > Lease:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:33.906861   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:33.906861   13136 command_runner.go:130] > Conditions:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:33.906861   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:33.906861   13136 command_runner.go:130] > Addresses:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:33.906861   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:33.906861   13136 command_runner.go:130] > Capacity:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.906861   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:33.906861   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:33.906861   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:33.906861   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:33.906861   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:33.907843   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:33.907843   13136 command_runner.go:130] > System Info:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:33.907843   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:33.907843   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:33.907843   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:33.907843   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:33.907843   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:33.907843   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:33.907843   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:33.907843   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:33.907843   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:33.907843   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:33.907843   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:33.907843   13136 command_runner.go:130] > Events:
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:33.907843   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  Starting                 5m32s                  kube-proxy       
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m35s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m35s (x2 over 5m35s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  RegisteredNode           5m34s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  NodeNotReady             3m43s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:33.907843   13136 command_runner.go:130] >   Normal  RegisteredNode           65s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:33.919215   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:33.919215   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:33.949137   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:33.949250   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.949250   13136 command_runner.go:130] !  >
	I0203 12:28:33.949250   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:33.949250   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:33.949353   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:33.949353   13136 command_runner.go:130] !  >
	I0203 12:28:33.949353   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:33.949353   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:33.949353   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:33.949442   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:33.949473   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:33.949501   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:33.951678   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:33.951678   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:33.982714   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:33.982714   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:33.982772   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:33.982948   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:33.983011   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:33.983060   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:33.983060   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:33.983100   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:33.983160   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:33.983221   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:33.983221   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:33.983552   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:33.985136   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:33.985193   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:33.985236   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:33.985236   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:33.985335   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:33.985447   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:33.985566   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.985566   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:33.985634   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:33.985634   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:33.985676   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:33.986203   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:33.986325   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.986452   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:33.986565   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.986679   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:33.987114   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:33.987171   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:33.987238   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:33.987238   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:33.987271   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:33.987295   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:33.987397   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:33.987509   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:33.987532   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:33.987639   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:33.987748   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:33.987864   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:33.987984   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:33.988097   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:33.988202   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:33.988315   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:33.988427   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:33.988537   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:33.988650   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:33.988765   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:33.988870   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:33.988981   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:33.989798   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:33.989864   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:33.989864   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:33.989909   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:33.989947   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:33.990005   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:33.990061   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:33.990061   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:33.990104   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990202   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:33.990258   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:33.990319   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:33.990375   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:33.990436   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:33.990491   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:33.990491   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990530   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:33.990620   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990676   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:33.990738   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:33.990795   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:33.990856   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:33.990910   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.990971   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:33.991025   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:33.991085   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:33.991085   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:33.991139   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:34.009905   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:34.009905   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:34.048684   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048748   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048823   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.048883   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:34.048981   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049049   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049111   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049187   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049250   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049336   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:34.049366   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049426   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049426   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049490   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049557   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:34.049617   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049684   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049745   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049829   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.049859   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049902   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.049969   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050044   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050109   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050193   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050193   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050222   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050265   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050342   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050441   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050506   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050575   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050575   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050653   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050721   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050782   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050859   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.050918   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051002   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:34.051033   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051079   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051148   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051238   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051238   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051268   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:34.051326   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051377   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051437   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.051485   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.051552   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053448   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:34.053539   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053569   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053569   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053634   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053634   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053663   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053663   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:34.053718   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053782   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053869   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.053899   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.053962   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.053992   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.053992   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054057   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054088   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054161   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054190   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054354   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054396   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054423   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054517   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054630   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054741   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:34.054828   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054828   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054858   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054858   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:34.054894   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.054966   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.054994   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055065   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055132   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055210   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055274   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:34.055350   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:34.055413   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055489   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055552   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055593   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:34.055634   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055693   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055788   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.055845   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056380   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056434   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056434   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056477   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056477   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056517   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056553   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056553   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056605   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:34.056605   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056636   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056677   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056677   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056716   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056784   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056868   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:34.056932   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.056986   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057048   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:34.057104   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057262   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057334   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:34.057334   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057362   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057362   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:34.057396   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:34.057456   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:34.057515   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:34.057568   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:34.075988   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:34.075988   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:34.098483   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:34.098483   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:34.099503   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:34.099626   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:34.099676   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:34.099732   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:34.099732   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:34.099787   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:34.099787   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:34.099846   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:34.099846   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:34.099846   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:34.099892   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:34.099936   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:34.099970   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:34.099989   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:34.100075   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:34.100125   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:34.100160   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:34.100160   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:34.100160   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:36.611776   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:28:36.636811   13136 command_runner.go:130] > 1987
	I0203 12:28:36.636811   13136 api_server.go:72] duration metric: took 1m6.4297971s to wait for apiserver process to appear ...
	I0203 12:28:36.636811   13136 api_server.go:88] waiting for apiserver healthz status ...
	I0203 12:28:36.644395   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:36.671522   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:36.672330   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:36.679417   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:36.708730   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:36.708842   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:36.715321   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:36.741336   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:36.741336   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:36.741336   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:36.749323   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:36.771595   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:36.771595   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:36.773223   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:36.779219   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:36.805347   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:36.805347   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:36.806760   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:36.813596   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:36.837656   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:36.837656   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:36.839592   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:36.847564   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:36.873872   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:36.874445   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:36.874526   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
	I0203 12:28:36.874625   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:36.874625   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:36.901490   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:36.901635   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:36.901716   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:36.901716   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:36.901797   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:36.901797   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:36.901859   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:36.901882   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:36.901910   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:36.901910   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:36.905748   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:36.905748   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938198   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.938721   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.938775   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.938819   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.938919   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.938989   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:36.939058   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:36.939127   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:36.939193   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:36.939258   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:36.939325   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:36.939395   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:36.939395   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939459   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939459   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939524   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939590   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939655   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939655   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.939725   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.939725   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:36.939789   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:36.939883   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.939950   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940016   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940079   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.940137   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940203   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940278   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940337   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940402   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940462   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940526   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:36.940526   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:36.940586   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:36.940651   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:36.940712   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:36.940775   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:36.940867   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:36.940867   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:36.940931   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:36.940993   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:36.941058   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:36.941058   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:36.941120   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:36.941186   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:36.941246   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:36.941312   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941374   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941439   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941439   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941505   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941576   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941637   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:36.941701   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:36.941773   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:36.941861   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.941879   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:36.941945   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942028   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942057   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942093   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942117   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942199   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:36.942199   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942226   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942262   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:36.942307   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:36.942837   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.942935   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943460   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943460   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.943524   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944061   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944131   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.944233   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.945091   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:36.973089   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:36.973089   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:37.004489   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:37.004489   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:37.005010   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:37.005051   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005051   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:37.005051   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:37.005095   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:37.005095   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:37.005622   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005622   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005663   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:37.005663   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:37.005699   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:37.005699   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:37.006221   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:37.006221   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:37.006259   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:37.006787   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:37.006787   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:37.006836   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:37.006905   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:37.006905   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:37.006970   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:37.007031   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:37.007053   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:37.007098   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:37.007128   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:37.015904   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:37.015904   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:37.043555   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:37.043992   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:37.044122   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:37.044142   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:37.044142   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:37.044243   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-c
luster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:37.044243   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:37.044318   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:37.044389   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:37.044455   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:37.044553   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:37.044619   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:37.044686   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:37.044757   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.044824   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:37.044934   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:37.045474   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:37.045474   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:37.045528   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:37.045591   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.045617   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:37.045696   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:37.046526   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	I0203 12:28:37.054724   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:37.055243   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:37.088930   13136 command_runner.go:130] > .:53
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:37.089005   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:37.089005   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:37.089005   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:37.089526   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:37.089566   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:37.089601   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:37.092821   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:37.092821   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:37.125897   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.127052   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:37.127097   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:37.127623   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:37.127660   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:37.128184   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:37.128226   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:37.128226   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:37.128270   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:37.128270   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:37.128307   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:37.128307   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:37.128351   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:37.128351   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:37.128389   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:37.128422   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:37.128460   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:37.128532   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.128566   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.128603   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129129   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.129168   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.129211   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129249   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129320   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.129365   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129404   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.129438   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129497   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129524   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:37.129616   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.129688   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130216   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130256   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:37.130256   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130307   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130307   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130343   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130343   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130406   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130451   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:37.130451   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.130500   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:37.130577   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:37.130650   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:37.130755   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:37.130842   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:37.130842   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.130919   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.130919   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:37.130980   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.130980   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:37.131041   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:37.131105   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:37.131105   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:37.131190   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131291   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131318   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131847   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.131887   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.132449   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.132483   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.132521   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133045   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133123   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133160   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:37.133685   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:37.133788   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133811   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133870   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:37.133870   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:37.133946   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:37.134019   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:37.134019   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134088   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134151   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134151   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134214   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134280   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134280   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134349   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134349   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134426   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134426   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:37.134491   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:37.134491   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:37.134567   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:37.134635   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:37.134697   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:37.134724   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:37.134787   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:37.180829   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:37.180829   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:37.386338   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:37.386380   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:37.386433   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.386433   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.386433   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.386474   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.386525   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:37.386578   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:37.386611   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.386628   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:37.386628   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:37.386669   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.386669   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.386669   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.386711   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:37.386711   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:37.386711   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.386711   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.386711   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:37.386711   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.386711   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:35 +0000
	I0203 12:28:37.386711   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.386805   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:37.386844   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:37.386844   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:37.386903   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:37.386903   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:37.386957   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:37.386957   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.387006   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:37.387006   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:37.387006   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.387052   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.387052   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.387052   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.387094   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.387094   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.387124   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.387124   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.387124   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.387124   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.387181   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.387181   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.387181   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:37.387215   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:37.387215   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.387215   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.387294   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.387294   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:37.387366   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:37.387366   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:37.387397   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.387434   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.387434   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:37.387478   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:37.387478   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0203 12:28:37.387527   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:37.387527   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387580   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387661   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:37.387661   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.387661   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.387661   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:37.387661   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:37.387731   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:37.387731   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:37.387761   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:37.387761   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:37.387761   13136 command_runner.go:130] > Events:
	I0203 12:28:37.387799   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:37.387799   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 68s                kube-proxy       
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.387828   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.387899   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.387969   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.387969   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.387999   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:37.388022   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:37.388055   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Warning  Rebooted                 72s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:37.388115   13136 command_runner.go:130] >   Normal   RegisteredNode           69s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:37.388186   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:37.388186   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:37.388186   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.388216   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.388238   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.388238   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.388271   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.388332   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.388332   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.388402   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:37.388402   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:37.388402   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:37.388434   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.388434   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.388434   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:37.388466   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.388466   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:37.388466   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.388466   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:37.388466   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:37.388527   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388527   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388577   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388577   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.388615   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.388633   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:37.388633   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:37.388633   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.388633   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.388673   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.388673   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.388673   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.388673   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.388673   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.388723   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.388723   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.388723   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.388723   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.388723   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.388770   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:37.388770   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:37.388770   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.388819   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.388819   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.388868   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.388868   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:37.388868   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:37.388868   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:37.388868   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.388923   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.388923   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:37.388923   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:37.388994   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:37.388994   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.389025   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:37.389025   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:37.389025   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:37.389025   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:37.389025   13136 command_runner.go:130] > Events:
	I0203 12:28:37.389025   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:37.389094   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.389094   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:37.389158   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:37.389228   13136 command_runner.go:130] >   Normal  RegisteredNode           69s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:37.389228   13136 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:37.389228   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:37.389228   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:37.389228   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:37.389228   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:37.389299   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:37.389369   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:37.389369   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:37.389369   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:37.389439   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:37.389439   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:37.389439   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:37.389492   13136 command_runner.go:130] > Lease:
	I0203 12:28:37.389492   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:37.389492   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:37.389492   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:37.389524   13136 command_runner.go:130] > Conditions:
	I0203 12:28:37.389524   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:37.389562   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:37.389562   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:37.389606   13136 command_runner.go:130] > Addresses:
	I0203 12:28:37.389655   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:37.389655   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:37.389655   13136 command_runner.go:130] > Capacity:
	I0203 12:28:37.389655   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.389655   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.389705   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.389705   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.389705   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.389754   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:37.389754   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:37.389754   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:37.389754   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:37.389754   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:37.389754   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:37.389806   13136 command_runner.go:130] > System Info:
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:37.389806   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:37.389806   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:37.389806   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:37.389877   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:37.389877   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:37.389877   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:37.389946   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:37.389977   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:37.389977   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:37.389977   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:37.390010   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:37.390010   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:37.390079   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:37.390110   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:37.390110   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:37.390110   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:37.390142   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:37.390142   13136 command_runner.go:130] > Events:
	I0203 12:28:37.390142   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:37.390142   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:37.390213   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:37.390213   13136 command_runner.go:130] >   Normal  Starting                 5m35s                  kube-proxy       
	I0203 12:28:37.390243   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:37.390276   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m39s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:37.390346   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:37.390376   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m39s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  RegisteredNode           5m38s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:37.390411   13136 command_runner.go:130] >   Normal  NodeNotReady             3m47s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:37.390481   13136 command_runner.go:130] >   Normal  RegisteredNode           69s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:37.400039   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:37.400039   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.430714   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.430714   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.430714   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:37.431981   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432042   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:37.432042   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432103   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.432208   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:37.432208   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:37.432255   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432299   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432350   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432399   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:37.432446   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432495   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432495   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432545   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432646   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432691   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432691   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432691   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:37.432798   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432798   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:37.432870   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432870   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:37.432952   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.432952   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.432952   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433042   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:37.433042   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433122   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433122   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433193   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:37.433273   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433273   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:37.433345   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433398   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.433398   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:37.433480   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:37.433523   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433523   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.433579   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433628   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:37.433684   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433684   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433684   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433765   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:37.433819   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433862   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:37.433913   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:37.433972   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.434045   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:37.434045   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	I0203 12:28:37.446891   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:37.446891   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.475222   13136 command_runner.go:130] !  >
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:37.475222   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.475222   13136 command_runner.go:130] !  >
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:37.475222   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:37.475222   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:37.478951   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:37.478951   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:37.505336   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:37.506127   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.506127   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.506183   13136 command_runner.go:130] !  >
	I0203 12:28:37.506183   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:37.506183   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:37.506183   13136 command_runner.go:130] !  >
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:37.506183   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:37.506183   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:37.509378   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:37.509459   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:37.549272   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.549272   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:37.549394   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.549394   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:37.549520   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:37.549629   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:37.549733   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:37.549837   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:37.549932   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:37.549932   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:37.550022   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:37.550074   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:37.550074   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:37.550129   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:37.550199   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.550260   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:37.550260   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:37.550310   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:37.550360   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.550360   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:37.550411   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.550411   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:37.550456   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:37.550456   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:37.550506   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:37.550552   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:37.550601   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:37.550646   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.550695   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550695   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:37.550741   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:37.550741   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.550802   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550847   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:37.550847   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:37.550896   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.550941   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.550941   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:37.551036   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:37.551096   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:37.551134   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:37.551177   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:37.551177   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:37.551220   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:37.551220   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:37.551262   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:37.551262   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:37.551343   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:37.551343   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:37.551387   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:37.551387   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:37.551430   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:37.551430   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:37.551469   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:37.551469   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:37.551519   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:37.551519   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:37.551564   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:37.551564   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:37.551607   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:37.551647   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:37.551647   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:37.551689   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:37.551689   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:37.551733   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:37.551776   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:37.551776   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:37.551815   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:37.551858   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:37.551858   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:37.551902   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:37.551902   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:37.551946   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:37.551984   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:37.551984   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:37.552029   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:37.552029   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:37.552073   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:37.552117   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:37.552117   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:37.552156   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:37.552156   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:37.552200   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:37.552244   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:37.552287   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:37.552287   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:37.552325   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:37.552325   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:37.552368   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:37.552368   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:37.552413   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:37.552413   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:37.552456   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:37.552456   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:37.552496   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:37.552496   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:37.552543   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:37.552614   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:37.552701   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:37.552701   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:37.552734   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:37.552793   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:37.552836   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:37.552868   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:37.552932   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:37.552932   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:37.552963   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:37.553021   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:37.553085   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:37.553151   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:37.553151   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:37.553183   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.553183   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:37.553252   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:37.553283   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:37.553312   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:37.553378   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:37.553408   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:37.553408   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:37.553462   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:37.553504   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.553528   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:37.553594   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.553594   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:37.553624   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:37.553684   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:37.553714   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.553714   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:37.553749   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:37.553790   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:37.553858   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.553889   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:37.553945   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:37.553945   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:37.553988   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.554012   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554084   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:37.554084   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.554114   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:37.554182   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:37.554214   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.554288   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:37.554288   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:37.554319   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.554374   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554443   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:37.554485   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.554547   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:37.554547   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:37.554607   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:37.571137   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:37.571137   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:37.601512   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.601558   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.601651   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:37.602051   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:37.602201   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.602201   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:37.602242   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:37.602340   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:37.602340   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:37.602496   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:37.602606   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:37.602692   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:37.602775   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:37.602859   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:37.602949   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:37.603034   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:37.603118   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:37.603118   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:37.603282   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:37.603660   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:37.604200   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:37.604378   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:37.604469   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:37.604561   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:37.604607   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:37.605143   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:37.605237   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:37.605330   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:37.605415   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.605501   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:37.605586   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:37.605681   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:37.605721   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:37.605749   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:37.605749   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:37.605793   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.605833   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:37.605877   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:37.605918   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:37.605918   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:37.605963   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:37.605963   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:37.606003   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:37.606003   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:37.606052   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:37.606093   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:37.606093   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:37.606138   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:37.606138   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:37.606178   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:37.606222   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:37.606222   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:37.606262   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:37.606262   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:37.606307   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:37.606347   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:37.606347   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:37.606391   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:37.606391   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:37.606431   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:37.606431   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:37.606475   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:37.606475   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:37.606514   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:37.606514   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:37.606558   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:37.606558   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:37.606599   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:37.606599   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:37.606643   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:37.606682   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:37.606727   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:37.606727   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:37.606767   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.606811   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.606811   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:37.606851   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:37.606851   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.606895   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:37.606895   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:37.606934   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:37.606934   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:37.606978   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:37.606978   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:37.607018   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:37.607018   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:37.607062   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:37.607062   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:37.607102   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:37.607313   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:37.607353   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:37.607353   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:37.607398   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:37.607398   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:37.607438   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:37.607438   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.607481   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:37.607481   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:37.607521   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:37.607521   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:37.607565   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607605   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:37.607605   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:37.607649   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:37.607689   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:37.607733   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:37.607733   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:37.607773   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607773   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.607817   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:37.607857   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:37.607857   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:37.607902   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:37.607942   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:37.607942   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:37.607986   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608026   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:37.608026   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:37.608070   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608070   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608111   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608155   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608155   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608196   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:37.608196   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608240   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608280   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608280   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608399   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608426   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.608955   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.608955   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609000   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609535   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609585   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609585   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609634   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609634   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:37.609679   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:37.609721   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609721   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.609765   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:37.631157   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:37.631157   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:37.652049   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:37.652049   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:37.652049   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:37.652049   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:37.652049   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:37.653070   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:37.653070   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:37.654990   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:37.655070   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:37.690699   13136 command_runner.go:130] > .:53
	I0203 12:28:37.690737   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:37.690737   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:37.690737   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:37.690737   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	I0203 12:28:37.691043   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:37.691043   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:37.718519   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:37.718519   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.718724   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:37.721236   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:37.721313   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.757463   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.758786   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.758786   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:37.758840   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.758883   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.758883   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.758932   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.758982   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.758982   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:37.759030   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759073   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759121   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759121   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:37.759169   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759217   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759217   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759260   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759308   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759308   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:37.759358   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759406   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:37.759406   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759449   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759497   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759497   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759546   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759593   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759593   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759636   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759636   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759682   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:37.759682   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759730   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759777   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759777   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:37.759819   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.759868   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.759918   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.759918   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.759965   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.759965   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:37.760008   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760055   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760106   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760153   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760195   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760195   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:37.760242   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760291   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760339   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760339   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760382   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760382   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:37.760430   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760478   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760527   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760527   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:37.760571   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760619   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760619   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760668   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:37.760715   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760715   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760758   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760805   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760855   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760855   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760903   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.760903   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.760945   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.760945   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.760993   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761041   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:37.761089   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761089   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:37.761132   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761180   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761180   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761229   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761276   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761276   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:37.761319   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761366   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761416   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761416   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:37.761464   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761507   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761507   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761555   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761555   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761603   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:37.761650   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761650   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761694   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761741   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:37.761741   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761790   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761838   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761838   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.761881   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.761929   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:37.761979   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.761979   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762027   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762027   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762070   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762117   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:37.762166   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762166   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762213   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762213   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762255   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762302   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762352   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762352   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:37.762400   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762442   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762489   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762537   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762585   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762585   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762631   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762679   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:37.762728   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762728   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:37.762776   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762776   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762819   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762819   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:37.762867   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.762915   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.762964   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.762964   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:37.763006   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763006   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763053   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763103   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763150   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763192   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:37.763192   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763240   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763289   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763336   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763336   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763379   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:37.763426   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763426   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763476   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763533   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763533   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:37.763576   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764104   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:37.764145   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764191   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764191   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764225   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764751   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:37.764751   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764792   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:37.764836   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:37.764869   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
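The kindnet entries above are one periodic reconciliation loop: roughly every ten seconds the agent handles each known node, logs its pod CIDR, and installs a route when it sees a CIDR it has not programmed yet (as at 12:22:59, when multinode-749300-m03 moved to 10.244.3.0/24 via 172.25.0.54). The Go toy below only mimics that logged behaviour; it is a hedged illustration, not kindnet's implementation, and it programs no real routes.

    package main

    import "fmt"

    // reconcile mimics the pattern visible in the kindnet log: handle every
    // observed node and note when its pod CIDR differs from what was known.
    func reconcile(known, observed map[string]string) {
        for node, cidr := range observed {
            fmt.Printf("Handling node %s\n", node)
            if known[node] != cidr {
                fmt.Printf("Node %s has CIDR [%s]; a real agent would (re)program a route here\n", node, cidr)
                known[node] = cidr
            }
        }
    }

    func main() {
        // CIDRs taken from the log: m03 later moves from 10.244.2.0/24 to 10.244.3.0/24.
        known := map[string]string{
            "multinode-749300-m02": "10.244.1.0/24",
            "multinode-749300-m03": "10.244.2.0/24",
        }
        observed := map[string]string{
            "multinode-749300-m02": "10.244.1.0/24",
            "multinode-749300-m03": "10.244.3.0/24",
        }
        reconcile(known, observed)
    }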
	I0203 12:28:37.782138   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:37.782138   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:37.845837   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:37.845837   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:37.845837   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:37.845837   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:37.845837   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:37.845837   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:37.845837   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:37.845837   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:37.845837   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:37.845837   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:37.846368   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:37.846412   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:37.846412   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:37.846412   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
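The container inventory above comes from the fallback shell command logged at 12:28:37.782: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. A minimal local Go sketch of that same fallback (illustrative only; the test harness actually runs it over SSH inside the minikube VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when available, otherwise fall back to docker,
        // exactly as the command captured in the log above does.
        shell := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", shell).CombinedOutput()
        if err != nil {
            fmt.Println("listing containers failed:", err)
        }
        fmt.Print(string(out))
    }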
	I0203 12:28:40.350062   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:28:40.358129   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
	I0203 12:28:40.358387   13136 round_trippers.go:463] GET https://172.25.12.244:8443/version
	I0203 12:28:40.358387   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:40.358426   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:40.358426   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:40.360856   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:28:40.360856   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:40.360954   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:40.360954   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Content-Length: 263
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:40 GMT
	I0203 12:28:40.360954   13136 round_trippers.go:580]     Audit-Id: fc39d40c-2ddd-4920-8f6d-faabd6c24e11
	I0203 12:28:40.360954   13136 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0203 12:28:40.361062   13136 api_server.go:141] control plane version: v1.32.1
	I0203 12:28:40.361062   13136 api_server.go:131] duration metric: took 3.7242091s to wait for apiserver health ...
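The two requests above (GET /healthz followed by GET /version) are how the harness confirms the API server is both responsive and reporting the expected build before it waits on kube-system pods. A minimal Go sketch of the same pair of calls, assuming the endpoint from the log (https://172.25.12.244:8443) and skipping TLS verification purely for illustration:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // versionInfo mirrors a few fields of the /version response shown above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        // InsecureSkipVerify is for this sketch only; real clients should trust the cluster CA.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        base := "https://172.25.12.244:8443" // endpoint taken from the log above

        resp, err := client.Get(base + "/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        resp, err = client.Get(base + "/version")
        if err != nil {
            panic(err)
        }
        var v versionInfo
        json.NewDecoder(resp.Body).Decode(&v)
        resp.Body.Close()
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }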
	I0203 12:28:40.361062   13136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 12:28:40.367792   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 12:28:40.398296   13136 command_runner.go:130] > 6c19e0a0ba9c
	I0203 12:28:40.398296   13136 logs.go:282] 1 containers: [6c19e0a0ba9c]
	I0203 12:28:40.406134   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 12:28:40.430191   13136 command_runner.go:130] > 09707a862965
	I0203 12:28:40.430191   13136 logs.go:282] 1 containers: [09707a862965]
	I0203 12:28:40.436999   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 12:28:40.465065   13136 command_runner.go:130] > edb5f00f1042
	I0203 12:28:40.465710   13136 command_runner.go:130] > fe91a8d012ae
	I0203 12:28:40.465710   13136 logs.go:282] 2 containers: [edb5f00f1042 fe91a8d012ae]
	I0203 12:28:40.472612   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 12:28:40.500066   13136 command_runner.go:130] > 2e43c2ecb4a9
	I0203 12:28:40.500098   13136 command_runner.go:130] > 88c40ca9aa3c
	I0203 12:28:40.500134   13136 logs.go:282] 2 containers: [2e43c2ecb4a9 88c40ca9aa3c]
	I0203 12:28:40.507740   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 12:28:40.534077   13136 command_runner.go:130] > cf33452e7244
	I0203 12:28:40.534122   13136 command_runner.go:130] > c6dc514e98f6
	I0203 12:28:40.534122   13136 logs.go:282] 2 containers: [cf33452e7244 c6dc514e98f6]
	I0203 12:28:40.540305   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 12:28:40.564122   13136 command_runner.go:130] > fa5ab1df8985
	I0203 12:28:40.564122   13136 command_runner.go:130] > 8ade10c0fb09
	I0203 12:28:40.564211   13136 logs.go:282] 2 containers: [fa5ab1df8985 8ade10c0fb09]
	I0203 12:28:40.571089   13136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0203 12:28:40.604629   13136 command_runner.go:130] > 644890f5738e
	I0203 12:28:40.604629   13136 command_runner.go:130] > fab2d9be6b5c
	I0203 12:28:40.606436   13136 logs.go:282] 2 containers: [644890f5738e fab2d9be6b5c]
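Each component's containers were located above with a filtered listing of the form docker ps -a --filter=name=k8s_<component> --format={{.ID}}. A hedged Go sketch of that discovery loop (run locally for illustration; the harness executes each call over SSH on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // One filtered listing per component, mirroring the calls captured above.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }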
	I0203 12:28:40.606571   13136 logs.go:123] Gathering logs for kubelet ...
	I0203 12:28:40.606571   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:15 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085338    1502 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.085444    1502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: I0203 12:27:16.086383    1502 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1502]: E0203 12:27:16.086828    1502 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848200    1552 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.635047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848394    1552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: I0203 12:27:16.848741    1552 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 kubelet[1552]: E0203 12:27:16.848794    1552 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:16 multinode-749300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:17 multinode-749300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655843    1646 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.655920    1646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.656491    1646 server.go:954] "Client rotation is on, will bootstrap in background"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.660314    1646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.685411    1646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.712367    1646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.712421    1646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719067    1646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.719190    1646 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720010    1646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720060    1646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-749300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720250    1646 topology_manager.go:138] "Creating topology manager with none policy"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720261    1646 container_manager_linux.go:304] "Creating device plugin manager"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.720394    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722746    1646 kubelet.go:446] "Attempting to sync node with API server"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722858    1646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722878    1646 kubelet.go:352] "Adding apiserver pod source"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.722889    1646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.728476    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.728558    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.730384    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.730414    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.730516    1646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.732095    1646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.732504    1646 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737572    1646 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0203 12:28:40.636047   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.737778    1646 server.go:1287] "Started kubelet"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.742490    1646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.747263    1646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.25.12.244:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-749300.1820b26d8c29f858  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-749300,UID:multinode-749300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-749300,},FirstTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,LastTimestamp:2025-02-03 12:27:19.73775164 +0000 UTC m=+0.175845113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-749300,}"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.753450    1646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.755438    1646 server.go:490] "Adding debug handlers to kubelet server"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.757330    1646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759063    1646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.759618    1646 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.760084    1646 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.760301    1646 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-749300\" not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.763820    1646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.766190    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="200ms"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.775750    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.775896    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776304    1646 factory.go:221] Registration of the systemd container factory successfully
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776423    1646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.776477    1646 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822393    1646 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822414    1646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.822433    1646 state_mem.go:36] "Initialized new in-memory state store"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823729    1646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823782    1646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823807    1646 policy_none.go:49] "None policy: Start"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823820    1646 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.823833    1646 state_mem.go:35] "Initializing new in-memory state store"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.824575    1646 state_mem.go:75] "Updated machine memory state"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.827550    1646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828214    1646 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.828323    1646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.834439    1646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836223    1646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.836276    1646 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-749300\" not found"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.839763    1646 reconciler.go:26] "Reconciler: start to sync state"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.849152    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851786    1646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.851873    1646 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852167    1646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.852266    1646 kubelet.go:2388] "Starting kubelet main sync loop"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.852425    1646 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: W0203 12:27:19.857733    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.857872    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.865017    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:40.637046   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.930098    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.931495    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.959594    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.959988    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff01fa7d8c67a792cac128e6be46aba4b9713e4a6cd005178a2573c7a847c7a"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965523    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b473818438dbd2e6a91783e24fae500384dbe88b88a3ed9dd8d9c8f4724a7a"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965561    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d03cfd685dc52d880c67a5a5040dfd6dcf7d2477c368b0b221099fe19d0fc3"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965576    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9e598659ff21f0255dbdf0fe1e487760842b470492b0b4377fb2491bf3f17"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.965587    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c93fcfaa46c30cca46747853d168923992fa34e3ab48bd74f55818221180a9"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.966435    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: E0203 12:27:19.969099    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="400ms"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.969271    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 kubelet[1646]: I0203 12:27:19.994181    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.008325    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166f3c8776d2abb8f173e76ba48d9aa5c71b04d34638145a7d22b947e0b1e16"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.024782    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb49b32ba0852c35cd9bd014b8dc9ccfc93a2c6a7d911bdd6baaba575c4e1d80"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.026552    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.027031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046040    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-kubeconfig\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046195    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046258    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4dc8a8db691940bb17375ec22c0921e-kubeconfig\") pod \"kube-scheduler-multinode-749300\" (UID: \"a4dc8a8db691940bb17375ec22c0921e\") " pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046319    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-certs\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046369    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-ca-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046389    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-k8s-certs\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046407    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20275825c8d44051c01f8d920b297acd-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-749300\" (UID: \"20275825c8d44051c01f8d920b297acd\") " pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046425    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-ca-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046445    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f85eb916773a482447e41aa40aaff233-etcd-data\") pod \"etcd-multinode-749300\" (UID: \"f85eb916773a482447e41aa40aaff233\") " pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046466    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-flexvolume-dir\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.046483    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25845f184856fc216b76acafcf34ee9-k8s-certs\") pod \"kube-controller-manager-multinode-749300\" (UID: \"c25845f184856fc216b76acafcf34ee9\") " pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.134568    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.136458    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.371298    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="800ms"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: I0203 12:27:20.537888    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.638050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.538850    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.642530    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.642673    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.718728    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.718775    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: W0203 12:27:20.727487    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 kubelet[1646]: E0203 12:27:20.727666    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-749300&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.096615    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.117402    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.172766    1646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-749300?timeout=10s\": dial tcp 172.25.12.244:8443: connect: connection refused" interval="1.6s"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: W0203 12:27:21.239099    1646 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.25.12.244:8443: connect: connection refused
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.239402    1646 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.25.12.244:8443: connect: connection refused" logger="UnhandledError"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: I0203 12:27:21.341008    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 kubelet[1646]: E0203 12:27:21.342386    1646 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.25.12.244:8443: connect: connection refused" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.155943    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.168589    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.184520    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: E0203 12:27:22.192380    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:22 multinode-749300 kubelet[1646]: I0203 12:27:22.944384    1646 kubelet_node_status.go:76] "Attempting to register node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.220031    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221067    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.221592    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:23 multinode-749300 kubelet[1646]: E0203 12:27:23.222217    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222471    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.222938    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: E0203 12:27:24.223334    1646 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-749300\" not found" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:24 multinode-749300 kubelet[1646]: I0203 12:27:24.962104    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.072863    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-749300\" already exists" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.072916    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.096600    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-749300\" already exists" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.096649    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.100835    1646 kubelet_node_status.go:125] "Node was previously registered" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101001    1646 kubelet_node_status.go:79] "Successfully registered node" node="multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.101046    1646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.102196    1646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.103579    1646 setters.go:602] "Node became not ready" node="multinode-749300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-03T12:27:25Z","lastTransitionTime":"2025-02-03T12:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.123635    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-749300\" already exists" pod="kube-system/kube-controller-manager-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.123696    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.143136    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.639050   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.231645    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.250920    1646 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-749300\" already exists" pod="kube-system/kube-scheduler-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.733100    1646 apiserver.go:52] "Watching apiserver"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740335    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-749300" podUID="b18ba461-b225-4090-8341-159171502b52"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.740880    1646 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-749300" podUID="c751851c-68ee-4c15-80ca-32642fcf2a5a"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.741767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.743201    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.768020    1646 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.798228    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-xtables-lock\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799102    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c991afa-7bb0-4d52-bded-22d68037b5ae-tmp\") pod \"storage-provisioner\" (UID: \"4c991afa-7bb0-4d52-bded-22d68037b5ae\") " pod="kube-system/storage-provisioner"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799171    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-xtables-lock\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799205    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1709b874-4fee-41f5-8d30-24912b2fa725-lib-modules\") pod \"kube-proxy-9g92t\" (UID: \"1709b874-4fee-41f5-8d30-24912b2fa725\") " pod="kube-system/kube-proxy-9g92t"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799246    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-cni-cfg\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799264    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c155d5-fb9b-42f5-8e64-865c44a5d4e6-lib-modules\") pod \"kindnet-h6m57\" (UID: \"67c155d5-fb9b-42f5-8e64-865c44a5d4e6\") " pod="kube-system/kindnet-h6m57"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799337    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.799426    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.799386    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.800808    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.300655438 +0000 UTC m=+6.738748911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812299    1646 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.812369    1646 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-749300"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.843057    1646 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862699    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862730    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: E0203 12:27:25.862793    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:26.362774296 +0000 UTC m=+6.800867869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.898492    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8703dd831250f30e213efd5fca131d7" path="/var/lib/kubelet/pods/a8703dd831250f30e213efd5fca131d7/volumes"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.899802    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8016677ee73c66077ce584fb15354" path="/var/lib/kubelet/pods/cea8016677ee73c66077ce584fb15354/volumes"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.952875    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-749300" podStartSLOduration=0.952857614 podStartE2EDuration="952.857614ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.937443526 +0000 UTC m=+6.375537099" watchObservedRunningTime="2025-02-03 12:27:25.952857614 +0000 UTC m=+6.390951187"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 kubelet[1646]: I0203 12:27:25.974229    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-749300" podStartSLOduration=0.974210637 podStartE2EDuration="974.210637ms" podCreationTimestamp="2025-02-03 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-03 12:27:25.953477018 +0000 UTC m=+6.391570591" watchObservedRunningTime="2025-02-03 12:27:25.974210637 +0000 UTC m=+6.412304110"
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303818    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.303893    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.303876335 +0000 UTC m=+7.741969908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405407    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405530    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 kubelet[1646]: E0203 12:27:26.405596    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:27.40557752 +0000 UTC m=+7.843670993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.640049   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.315813    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.317831    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.317806871 +0000 UTC m=+9.755900344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416628    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416661    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.416713    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:29.41669654 +0000 UTC m=+9.854790013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.861806    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 kubelet[1646]: E0203 12:27:27.862570    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336385    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.336563    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.336541991 +0000 UTC m=+13.774635464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437576    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.437923    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.438074    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:33.438050975 +0000 UTC m=+13.876144448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853969    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:29 multinode-749300 kubelet[1646]: E0203 12:27:29.853720    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.852706    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:31 multinode-749300 kubelet[1646]: E0203 12:27:31.853391    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369187    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.369409    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.369390703 +0000 UTC m=+21.807484276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470103    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470221    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.470291    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:41.470271952 +0000 UTC m=+21.908365425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.853533    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:33 multinode-749300 kubelet[1646]: E0203 12:27:33.854435    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.853643    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:35 multinode-749300 kubelet[1646]: E0203 12:27:35.854148    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.852924    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:37 multinode-749300 kubelet[1646]: E0203 12:27:37.853434    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.861767    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:39 multinode-749300 kubelet[1646]: E0203 12:27:39.862616    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448061    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.448222    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.44820293 +0000 UTC m=+37.886296403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549425    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.641039   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549465    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.549520    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:27:57.549504632 +0000 UTC m=+37.987598205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.852817    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:41 multinode-749300 kubelet[1646]: E0203 12:27:41.853419    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.853585    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:43 multinode-749300 kubelet[1646]: E0203 12:27:43.854245    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.853520    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:45 multinode-749300 kubelet[1646]: E0203 12:27:45.857915    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.853864    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:47 multinode-749300 kubelet[1646]: E0203 12:27:47.854661    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.854481    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:49 multinode-749300 kubelet[1646]: E0203 12:27:49.855863    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.853472    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:51 multinode-749300 kubelet[1646]: E0203 12:27:51.854452    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.859668    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:53 multinode-749300 kubelet[1646]: E0203 12:27:53.860055    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.853633    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:55 multinode-749300 kubelet[1646]: E0203 12:27:55.854320    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494848    1646 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.494935    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume podName:c94a77a3-456e-41d7-b9ad-7aa97e0264a7 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.494917969 +0000 UTC m=+69.933011442 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c94a77a3-456e-41d7-b9ad-7aa97e0264a7-config-volume") pod "coredns-668d6bf9bc-v2gkp" (UID: "c94a77a3-456e-41d7-b9ad-7aa97e0264a7") : object "kube-system"/"coredns" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595875    1646 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595906    1646 projected.go:194] Error preparing data for projected volume kube-api-access-m664r for pod default/busybox-58667487b6-zgvmd: object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.595961    1646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r podName:5d672e4b-d76f-474b-ab97-487b532b6140 nodeName:}" failed. No retries permitted until 2025-02-03 12:28:29.595942441 +0000 UTC m=+70.034036014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-m664r" (UniqueName: "kubernetes.io/projected/5d672e4b-d76f-474b-ab97-487b532b6140-kube-api-access-m664r") pod "busybox-58667487b6-zgvmd" (UID: "5d672e4b-d76f-474b-ab97-487b532b6140") : object "default"/"kube-root-ca.crt" not registered
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.853654    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.642057   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.854513    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.906113    1646 scope.go:117] "RemoveContainer" containerID="a6484d4fc4d7f6ee26b1c4c1afc10f9bfba5b7f80f2181e9727f163daaf58ce6"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: I0203 12:27:57.907138    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 kubelet[1646]: E0203 12:27:57.910890    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4c991afa-7bb0-4d52-bded-22d68037b5ae)\"" pod="kube-system/storage-provisioner" podUID="4c991afa-7bb0-4d52-bded-22d68037b5ae"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.855276    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:27:59 multinode-749300 kubelet[1646]: E0203 12:27:59.856164    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.853743    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:01 multinode-749300 kubelet[1646]: E0203 12:28:01.854049    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853330    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:03 multinode-749300 kubelet[1646]: E0203 12:28:03.853968    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.853538    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:05 multinode-749300 kubelet[1646]: E0203 12:28:05.854181    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.853789    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:07 multinode-749300 kubelet[1646]: E0203 12:28:07.854093    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.860674    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-v2gkp" podUID="c94a77a3-456e-41d7-b9ad-7aa97e0264a7"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:09 multinode-749300 kubelet[1646]: E0203 12:28:09.861267    1646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-zgvmd" podUID="5d672e4b-d76f-474b-ab97-487b532b6140"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.015143    1646 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:10 multinode-749300 kubelet[1646]: I0203 12:28:10.852780    1646 scope.go:117] "RemoveContainer" containerID="edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.875787    1646 scope.go:117] "RemoveContainer" containerID="ebc67da1b9e9ac10747758e3a934f19f5572ae8668d2a69f7d6ee1682387d02a"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: E0203 12:28:19.883953    1646 iptables.go:577] "Could not set up iptables canary" err=<
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	I0203 12:28:40.643055   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	I0203 12:28:40.690041   13136 logs.go:123] Gathering logs for describe nodes ...
	I0203 12:28:40.690041   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 12:28:40.882462   13136 command_runner.go:130] > Name:               multinode-749300
	I0203 12:28:40.882512   13136 command_runner.go:130] > Roles:              control-plane
	I0203 12:28:40.882512   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.882512   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.882567   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.882567   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300
	I0203 12:28:40.882636   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.882666   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.882689   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.882743   13136 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0203 12:28:40.882767   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	I0203 12:28:40.882806   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.882806   13136 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0203 12:28:40.882861   13136 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0203 12:28:40.882861   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.882917   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.882917   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.882917   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	I0203 12:28:40.882980   13136 command_runner.go:130] > Taints:             <none>
	I0203 12:28:40.882980   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.882980   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.882980   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300
	I0203 12:28:40.883046   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.883046   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:28:35 +0000
	I0203 12:28:40.883046   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.883118   13136 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0203 12:28:40.883118   13136 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0203 12:28:40.883174   13136 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0203 12:28:40.883174   13136 command_runner.go:130] >   DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0203 12:28:40.883233   13136 command_runner.go:130] >   PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0203 12:28:40.883233   13136 command_runner.go:130] >   Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	I0203 12:28:40.883289   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.883289   13136 command_runner.go:130] >   InternalIP:  172.25.12.244
	I0203 12:28:40.883289   13136 command_runner.go:130] >   Hostname:    multinode-749300
	I0203 12:28:40.883361   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.883361   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.883418   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.883418   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.883418   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.883418   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.883480   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.883480   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.883480   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.883536   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.883536   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.883536   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.883536   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.883536   13136 command_runner.go:130] >   Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	I0203 12:28:40.883536   13136 command_runner.go:130] >   System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	I0203 12:28:40.883617   13136 command_runner.go:130] >   Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:40.883617   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.883676   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.883676   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.883738   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.883738   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.883795   13136 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0203 12:28:40.883795   13136 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0203 12:28:40.883795   13136 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0203 12:28:40.883866   13136 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.883866   13136 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.883905   13136 command_runner.go:130] >   default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:40.883905   13136 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0203 12:28:40.884002   13136 command_runner.go:130] >   kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0203 12:28:40.884002   13136 command_runner.go:130] >   kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0203 12:28:40.884074   13136 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884132   13136 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884235   13136 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0203 12:28:40.884235   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.884235   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.884307   13136 command_runner.go:130] >   Resource           Requests     Limits
	I0203 12:28:40.884307   13136 command_runner.go:130] >   --------           --------     ------
	I0203 12:28:40.884362   13136 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0203 12:28:40.884362   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0203 12:28:40.884362   13136 command_runner.go:130] > Events:
	I0203 12:28:40.884464   13136 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0203 12:28:40.884464   13136 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0203 12:28:40.884464   13136 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0203 12:28:40.884535   13136 command_runner.go:130] >   Normal   Starting                 72s                kube-proxy       
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884590   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884692   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m                kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m                kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-749300 status is now: NodeReady
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   Starting                 81s                kubelet          Starting kubelet.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Warning  Rebooted                 75s                kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Normal   RegisteredNode           72s                node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	I0203 12:28:40.884763   13136 command_runner.go:130] > Name:               multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:40.884763   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_07_57_0700
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.884763   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.884763   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:07:57 +0000
	I0203 12:28:40.884763   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:40.884763   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:40.884763   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.884763   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.884763   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m02
	I0203 12:28:40.884763   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.884763   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:24:25 +0000
	I0203 12:28:40.884763   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.884763   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:40.884763   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:40.884763   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.884763   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:15 +0000   Mon, 03 Feb 2025 12:28:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.885304   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.885304   13136 command_runner.go:130] >   InternalIP:  172.25.8.35
	I0203 12:28:40.885419   13136 command_runner.go:130] >   Hostname:    multinode-749300-m02
	I0203 12:28:40.885419   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.885419   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.885491   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.885491   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.885530   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.885530   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.885530   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.885530   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.885623   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.885623   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.885623   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.885623   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.885623   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.885695   13136 command_runner.go:130] >   Machine ID:                 90c62936ba5d4d0aaeb17fe1abbb7ffd
	I0203 12:28:40.885750   13136 command_runner.go:130] >   System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	I0203 12:28:40.885750   13136 command_runner.go:130] >   Boot ID:                    4aec9dc0-92f8-4c4d-b16a-206948ca045d
	I0203 12:28:40.885750   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.885750   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.885854   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.885929   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.885929   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.885929   13136 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0203 12:28:40.885986   13136 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0203 12:28:40.885986   13136 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0203 12:28:40.885986   13136 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.885986   13136 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.886095   13136 command_runner.go:130] >   default                     busybox-58667487b6-c66bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0203 12:28:40.886095   13136 command_runner.go:130] >   kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0203 12:28:40.886166   13136 command_runner.go:130] >   kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0203 12:28:40.886166   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.886221   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.886221   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:40.886221   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:40.886221   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:40.886323   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:40.886323   13136 command_runner.go:130] > Events:
	I0203 12:28:40.886394   13136 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0203 12:28:40.886394   13136 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	I0203 12:28:40.886449   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.886569   13136 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:40.886640   13136 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	I0203 12:28:40.886696   13136 command_runner.go:130] >   Normal  RegisteredNode           72s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	I0203 12:28:40.886696   13136 command_runner.go:130] >   Normal  NodeNotReady             22s                node-controller  Node multinode-749300-m02 status is now: NodeNotReady
	I0203 12:28:40.886696   13136 command_runner.go:130] > Name:               multinode-749300-m03
	I0203 12:28:40.886696   13136 command_runner.go:130] > Roles:              <none>
	I0203 12:28:40.886696   13136 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0203 12:28:40.886800   13136 command_runner.go:130] >                     kubernetes.io/hostname=multinode-749300-m03
	I0203 12:28:40.886874   13136 command_runner.go:130] >                     kubernetes.io/os=linux
	I0203 12:28:40.886874   13136 command_runner.go:130] >                     minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/name=multinode-749300
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	I0203 12:28:40.886932   13136 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0203 12:28:40.886932   13136 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0203 12:28:40.887034   13136 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0203 12:28:40.887034   13136 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0203 12:28:40.887034   13136 command_runner.go:130] > CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	I0203 12:28:40.887105   13136 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0203 12:28:40.887160   13136 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0203 12:28:40.887160   13136 command_runner.go:130] > Unschedulable:      false
	I0203 12:28:40.887160   13136 command_runner.go:130] > Lease:
	I0203 12:28:40.887160   13136 command_runner.go:130] >   HolderIdentity:  multinode-749300-m03
	I0203 12:28:40.887160   13136 command_runner.go:130] >   AcquireTime:     <unset>
	I0203 12:28:40.887160   13136 command_runner.go:130] >   RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	I0203 12:28:40.887261   13136 command_runner.go:130] > Conditions:
	I0203 12:28:40.887261   13136 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0203 12:28:40.887333   13136 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0203 12:28:40.887388   13136 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887388   13136 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887388   13136 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887492   13136 command_runner.go:130] >   Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0203 12:28:40.887492   13136 command_runner.go:130] > Addresses:
	I0203 12:28:40.887492   13136 command_runner.go:130] >   InternalIP:  172.25.0.54
	I0203 12:28:40.887492   13136 command_runner.go:130] >   Hostname:    multinode-749300-m03
	I0203 12:28:40.887597   13136 command_runner.go:130] > Capacity:
	I0203 12:28:40.887597   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.887597   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.887597   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.887597   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.887597   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.887697   13136 command_runner.go:130] > Allocatable:
	I0203 12:28:40.887697   13136 command_runner.go:130] >   cpu:                2
	I0203 12:28:40.887697   13136 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0203 12:28:40.887697   13136 command_runner.go:130] >   hugepages-2Mi:      0
	I0203 12:28:40.887697   13136 command_runner.go:130] >   memory:             2164264Ki
	I0203 12:28:40.887769   13136 command_runner.go:130] >   pods:               110
	I0203 12:28:40.887809   13136 command_runner.go:130] > System Info:
	I0203 12:28:40.887809   13136 command_runner.go:130] >   Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	I0203 12:28:40.887809   13136 command_runner.go:130] >   System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	I0203 12:28:40.887809   13136 command_runner.go:130] >   Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Kernel Version:             5.10.207
	I0203 12:28:40.887896   13136 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Operating System:           linux
	I0203 12:28:40.887896   13136 command_runner.go:130] >   Architecture:               amd64
	I0203 12:28:40.887968   13136 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0203 12:28:40.887968   13136 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0203 12:28:40.888026   13136 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0203 12:28:40.888026   13136 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0203 12:28:40.888026   13136 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0203 12:28:40.888026   13136 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0203 12:28:40.888133   13136 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0203 12:28:40.888133   13136 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0203 12:28:40.888133   13136 command_runner.go:130] >   kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0203 12:28:40.888264   13136 command_runner.go:130] >   kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0203 12:28:40.888264   13136 command_runner.go:130] > Allocated resources:
	I0203 12:28:40.888264   13136 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0203 12:28:40.888264   13136 command_runner.go:130] >   Resource           Requests   Limits
	I0203 12:28:40.888365   13136 command_runner.go:130] >   --------           --------   ------
	I0203 12:28:40.888365   13136 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0203 12:28:40.888365   13136 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0203 12:28:40.888365   13136 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0203 12:28:40.888438   13136 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0203 12:28:40.888438   13136 command_runner.go:130] > Events:
	I0203 12:28:40.888476   13136 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0203 12:28:40.888476   13136 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0203 12:28:40.888476   13136 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:40.888563   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:40.888664   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.888750   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.888750   13136 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:40.888803   13136 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m42s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	I0203 12:28:40.888866   13136 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	I0203 12:28:40.888900   13136 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	I0203 12:28:40.888955   13136 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	I0203 12:28:40.888955   13136 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0203 12:28:40.889006   13136 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	I0203 12:28:40.889085   13136 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	I0203 12:28:40.889125   13136 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	I0203 12:28:40.889125   13136 command_runner.go:130] >   Normal  RegisteredNode           72s                    node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
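Editor's note: the block above is the gathered `kubectl describe node` style output for multinode-749300-m03, showing Ready=Unknown ("Kubelet stopped posting node status"), the NodeNotReady event, and the CIDRAssignmentFailed event. As a hedged aside (not part of the test run), the same conditions could be inspected programmatically with client-go; the node name is taken from the log, everything else below (kubeconfig location, cluster reachability) is an assumption.

// Hypothetical sketch: fetch the node described above and print its conditions.
// Assumes a reachable kubeconfig for the multinode-749300 cluster.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-749300-m03", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// Mirrors the "Ready  Unknown ... Kubelet stopped posting node status" row above.
		fmt.Printf("%-20s %-8s %s\n", c.Type, c.Status, c.Message)
	}
}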
	I0203 12:28:40.899700   13136 logs.go:123] Gathering logs for kube-proxy [c6dc514e98f6] ...
	I0203 12:28:40.899700   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6dc514e98f6"
	I0203 12:28:40.931168   13136 command_runner.go:130] ! I0203 12:05:01.746820       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:40.931168   13136 command_runner.go:130] ! E0203 12:05:01.780088       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:40.931656   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:40.931695   13136 command_runner.go:130] !  >
	I0203 12:28:40.931695   13136 command_runner.go:130] ! E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:40.931732   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:40.931767   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:40.931767   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:40.931767   13136 command_runner.go:130] !  >
	I0203 12:28:40.931767   13136 command_runner.go:130] ! I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	I0203 12:28:40.931823   13136 command_runner.go:130] ! E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:40.931823   13136 command_runner.go:130] ! I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:40.931823   13136 command_runner.go:130] ! I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:40.931901   13136 command_runner.go:130] ! I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:40.931973   13136 command_runner.go:130] ! I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:40.932038   13136 command_runner.go:130] ! I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:40.932038   13136 command_runner.go:130] ! I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:40.932105   13136 command_runner.go:130] ! I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
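Editor's note: in the kube-proxy log above, the "Error cleaning up nftables rules ... Operation not supported" lines indicate the guest kernel lacks nftables support; kube-proxy then proceeds with the iptables proxier, as the "Using iptables Proxier" line confirms, so those errors are not the test failure. The "Waiting for caches to sync" / "Caches are synced" pairs are the standard client-go shared-informer startup pattern. A minimal, hypothetical sketch of that same pattern (not kube-proxy's actual code) follows; the kubeconfig path and resync period are assumptions.

// Hypothetical sketch: start a shared informer for Services and block until its
// cache has synced, which is what the "Waiting for caches to sync" /
// "Caches are synced for service config" lines report.
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	factory.Start(stop)

	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		log.Fatal("timed out waiting for service cache to sync")
	}
	log.Println("service cache synced") // analogous to "Caches are synced for service config"
}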
	I0203 12:28:40.934191   13136 logs.go:123] Gathering logs for kindnet [fab2d9be6b5c] ...
	I0203 12:28:40.935204   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fab2d9be6b5c"
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.481747       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.482211       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.965370   13136 command_runner.go:130] ! I0203 12:13:59.482302       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479387       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479438       1 main.go:301] handling current node
	I0203 12:28:40.965848   13136 command_runner.go:130] ! I0203 12:14:09.479457       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.966043   13136 command_runner.go:130] ! I0203 12:14:09.479464       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.966101   13136 command_runner.go:130] ! I0203 12:14:09.480145       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.966138   13136 command_runner.go:130] ! I0203 12:14:09.480233       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.966226   13136 command_runner.go:130] ! I0203 12:14:19.488038       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.966707   13136 command_runner.go:130] ! I0203 12:14:19.488073       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488090       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488096       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488279       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:19.488286       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.479983       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480097       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480118       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480126       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480690       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:29.480801       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480046       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480207       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480229       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480240       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480703       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:39.480794       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479153       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479261       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479283       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479292       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479491       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:49.479575       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.478982       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479435       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479519       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479535       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:14:59.479541       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479541       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479593       1 main.go:301] handling current node
	I0203 12:28:40.970587   13136 command_runner.go:130] ! I0203 12:15:09.479613       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.479621       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.480303       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:09.480382       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488389       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488489       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488509       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.488517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.489046       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:19.489142       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481025       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481131       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481151       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481158       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481350       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:29.481373       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.487726       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.487893       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488092       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488105       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488232       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:39.488259       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484117       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484177       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484234       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.484314       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.485204       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:49.485392       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481092       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481195       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481218       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481226       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481484       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:15:59.481510       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480009       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480236       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480645       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480840       1 main.go:301] handling current node
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.480969       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971243   13136 command_runner.go:130] ! I0203 12:16:09.481255       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479435       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479557       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.479977       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.480328       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:19.480522       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479221       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479267       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479321       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479572       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:29.479670       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484562       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484671       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484693       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.484700       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.485166       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:39.485259       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488261       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488416       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488709       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488783       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488801       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:49.488807       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479138       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479218       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479312       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.479448       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.480031       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:16:59.480132       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479412       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479454       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479652       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479680       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479774       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:09.479785       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481248       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481299       1 main.go:301] handling current node
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481317       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481324       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481727       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:19.481754       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.971869   13136 command_runner.go:130] ! I0203 12:17:29.479244       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972492   13136 command_runner.go:130] ! I0203 12:17:29.479364       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.479384       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.479392       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.480340       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:29.480488       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486004       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486109       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486129       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.486137       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.487056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:39.487145       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479174       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479407       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479529       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.479564       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.480448       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:49.480489       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479178       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479464       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479683       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479843       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479900       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:17:59.479909       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.479760       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.479855       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480291       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480340       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480365       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:09.480374       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487177       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487393       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487478       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.487578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.488002       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:19.488201       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.479665       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.479790       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480229       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480333       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480694       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:29.480800       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.478894       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479048       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479069       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479077       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479735       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:39.479846       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487084       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487121       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487139       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487146       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.487825       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:49.488251       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.479844       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.479986       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480763       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480852       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480911       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:18:59.480921       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.479931       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480043       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480242       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480487       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480506       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:09.480516       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486529       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486564       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486583       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486590       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.486994       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:19.487009       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.480898       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481006       1 main.go:301] handling current node
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481028       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481233       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:29.481256       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.972531   13136 command_runner.go:130] ! I0203 12:19:39.486219       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486253       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486535       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486661       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:39.486668       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.486894       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.487004       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.487855       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488255       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488415       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:49.488578       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480029       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480068       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480087       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480095       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.480976       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:19:59.481279       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480108       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480217       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480237       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480661       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:09.480744       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.479758       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480248       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480343       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480356       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480786       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:19.480803       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.479490       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.479617       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480064       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480169       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480353       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:29.480368       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479641       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479836       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.479918       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480224       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480721       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:39.480751       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479128       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479242       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479263       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479271       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479687       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:49.479937       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.485967       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486008       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486029       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486037       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486327       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:20:59.486342       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479406       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479537       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479560       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.479571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.480561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:09.480668       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486059       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486172       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486192       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486199       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486776       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:19.486913       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.479291       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.479421       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480168       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480268       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480621       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:29.480720       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.479561       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.479684       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480019       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480130       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480149       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:39.480157       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.485937       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486015       1 main.go:301] handling current node
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486511       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.486846       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.487441       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:49.487470       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.973486   13136 command_runner.go:130] ! I0203 12:21:59.479224       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479615       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479639       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479828       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:21:59.479942       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.479352       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.479745       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480390       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480426       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.480922       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:09.481129       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480040       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480088       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480938       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.480972       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.481966       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:19.482194       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479113       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479222       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479243       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479251       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479605       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:29.479637       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488770       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488806       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488823       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.488830       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.489296       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:39.489449       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479056       1 main.go:297] Handling node with IPs: map[172.25.13.163:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479097       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.2.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479550       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479661       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479679       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:49.479687       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.478931       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479023       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479077       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479136       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479604       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:22:59.479991       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479836       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479965       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479985       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.479997       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.480363       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:09.480514       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480167       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480217       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480239       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480245       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480628       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:19.480750       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.488733       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489234       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489474       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.489946       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.490535       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:29.490635       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479240       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479359       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479382       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479391       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479635       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:39.479662       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484665       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484760       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484814       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.484827       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.485522       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:49.485609       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488178       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488328       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488725       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.488825       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.489199       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:23:59.489288       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.478924       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.478990       1 main.go:301] handling current node
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479043       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479072       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479342       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.974486   13136 command_runner.go:130] ! I0203 12:24:09.479511       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485161       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485331       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485367       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.485388       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.486434       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:40.975485   13136 command_runner.go:130] ! I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:40.994317   13136 logs.go:123] Gathering logs for dmesg ...
	I0203 12:28:40.994317   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 12:28:41.018064   13136 command_runner.go:130] > [Feb 3 12:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.106774] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0203 12:28:41.018064   13136 command_runner.go:130] > [  +0.023238] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0203 12:28:41.018249   13136 command_runner.go:130] > [  +0.000004] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0203 12:28:41.018334   13136 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.060292] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.024825] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0203 12:28:41.018469   13136 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0203 12:28:41.018469   13136 command_runner.go:130] > [ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	I0203 12:28:41.018469   13136 command_runner.go:130] > [  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	I0203 12:28:41.018469   13136 command_runner.go:130] > [ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	I0203 12:28:41.020436   13136 logs.go:123] Gathering logs for kube-apiserver [6c19e0a0ba9c] ...
	I0203 12:28:41.020436   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c19e0a0ba9c"
	I0203 12:28:41.048146   13136 command_runner.go:130] ! W0203 12:27:22.209566       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.212385       1 options.go:238] external host was not specified, using 172.25.12.244
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.215411       1 server.go:143] Version: v1.32.1
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.215519       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.048320   13136 command_runner.go:130] ! I0203 12:27:22.961695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0203 12:28:41.048391   13136 command_runner.go:130] ! I0203 12:27:22.981400       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:41.048435   13136 command_runner.go:130] ! I0203 12:27:22.991076       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0203 12:28:41.048513   13136 command_runner.go:130] ! I0203 12:27:22.991179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0203 12:28:41.048513   13136 command_runner.go:130] ! I0203 12:27:22.995374       1 instance.go:233] Using reconciler: lease
	I0203 12:28:41.048551   13136 command_runner.go:130] ! I0203 12:27:23.455051       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0203 12:28:41.048576   13136 command_runner.go:130] ! W0203 12:27:23.455431       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048625   13136 command_runner.go:130] ! I0203 12:27:23.772863       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0203 12:28:41.048625   13136 command_runner.go:130] ! I0203 12:27:23.773118       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048671   13136 command_runner.go:130] ! I0203 12:27:24.011206       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048671   13136 command_runner.go:130] ! I0203 12:27:24.156938       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0203 12:28:41.048720   13136 command_runner.go:130] ! I0203 12:27:24.167831       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0203 12:28:41.048720   13136 command_runner.go:130] ! W0203 12:27:24.167952       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048767   13136 command_runner.go:130] ! W0203 12:27:24.167965       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.048767   13136 command_runner.go:130] ! I0203 12:27:24.168630       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0203 12:28:41.048767   13136 command_runner.go:130] ! W0203 12:27:24.168731       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048816   13136 command_runner.go:130] ! I0203 12:27:24.169810       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0203 12:28:41.048816   13136 command_runner.go:130] ! I0203 12:27:24.170800       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0203 12:28:41.048862   13136 command_runner.go:130] ! W0203 12:27:24.170918       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0203 12:28:41.048862   13136 command_runner.go:130] ! W0203 12:27:24.170928       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0203 12:28:41.048910   13136 command_runner.go:130] ! I0203 12:27:24.172706       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0203 12:28:41.048910   13136 command_runner.go:130] ! W0203 12:27:24.172818       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0203 12:28:41.048956   13136 command_runner.go:130] ! I0203 12:27:24.173842       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0203 12:28:41.048956   13136 command_runner.go:130] ! W0203 12:27:24.173955       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.048956   13136 command_runner.go:130] ! W0203 12:27:24.173976       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049006   13136 command_runner.go:130] ! I0203 12:27:24.174699       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0203 12:28:41.049006   13136 command_runner.go:130] ! W0203 12:27:24.174807       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049052   13136 command_runner.go:130] ! W0203 12:27:24.174815       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0203 12:28:41.049052   13136 command_runner.go:130] ! I0203 12:27:24.175562       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0203 12:28:41.049100   13136 command_runner.go:130] ! W0203 12:27:24.175675       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049100   13136 command_runner.go:130] ! I0203 12:27:24.177712       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0203 12:28:41.049146   13136 command_runner.go:130] ! W0203 12:27:24.177817       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049146   13136 command_runner.go:130] ! W0203 12:27:24.177827       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049194   13136 command_runner.go:130] ! I0203 12:27:24.178337       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0203 12:28:41.049240   13136 command_runner.go:130] ! W0203 12:27:24.178525       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049240   13136 command_runner.go:130] ! W0203 12:27:24.178534       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049289   13136 command_runner.go:130] ! I0203 12:27:24.179521       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0203 12:28:41.049289   13136 command_runner.go:130] ! W0203 12:27:24.179622       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0203 12:28:41.049334   13136 command_runner.go:130] ! I0203 12:27:24.181744       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0203 12:28:41.049334   13136 command_runner.go:130] ! W0203 12:27:24.181838       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049334   13136 command_runner.go:130] ! W0203 12:27:24.181848       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049383   13136 command_runner.go:130] ! I0203 12:27:24.182574       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0203 12:28:41.049383   13136 command_runner.go:130] ! W0203 12:27:24.182612       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049430   13136 command_runner.go:130] ! W0203 12:27:24.182619       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049430   13136 command_runner.go:130] ! I0203 12:27:24.185237       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0203 12:28:41.049479   13136 command_runner.go:130] ! W0203 12:27:24.185340       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049479   13136 command_runner.go:130] ! W0203 12:27:24.185438       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049524   13136 command_runner.go:130] ! I0203 12:27:24.187067       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0203 12:28:41.049524   13136 command_runner.go:130] ! W0203 12:27:24.187189       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0203 12:28:41.049572   13136 command_runner.go:130] ! W0203 12:27:24.187200       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0203 12:28:41.049572   13136 command_runner.go:130] ! W0203 12:27:24.187204       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049619   13136 command_runner.go:130] ! I0203 12:27:24.193311       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0203 12:28:41.049619   13136 command_runner.go:130] ! W0203 12:27:24.193504       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0203 12:28:41.049619   13136 command_runner.go:130] ! W0203 12:27:24.193516       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0203 12:28:41.049667   13136 command_runner.go:130] ! I0203 12:27:24.195828       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0203 12:28:41.049667   13136 command_runner.go:130] ! W0203 12:27:24.195943       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049713   13136 command_runner.go:130] ! W0203 12:27:24.195952       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0203 12:28:41.049713   13136 command_runner.go:130] ! I0203 12:27:24.196821       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0203 12:28:41.049761   13136 command_runner.go:130] ! W0203 12:27:24.196925       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049761   13136 command_runner.go:130] ! I0203 12:27:24.210087       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0203 12:28:41.049807   13136 command_runner.go:130] ! W0203 12:27:24.210106       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0203 12:28:41.049807   13136 command_runner.go:130] ! I0203 12:27:24.794572       1 secure_serving.go:213] Serving securely on [::]:8443
	I0203 12:28:41.049855   13136 command_runner.go:130] ! I0203 12:27:24.794794       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0203 12:28:41.049902   13136 command_runner.go:130] ! I0203 12:27:24.795068       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.049902   13136 command_runner.go:130] ! I0203 12:27:24.795407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.049950   13136 command_runner.go:130] ! I0203 12:27:24.802046       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.049950   13136 command_runner.go:130] ! I0203 12:27:24.802388       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0203 12:28:41.049995   13136 command_runner.go:130] ! I0203 12:27:24.802453       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0203 12:28:41.049995   13136 command_runner.go:130] ! I0203 12:27:24.803591       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0203 12:28:41.050044   13136 command_runner.go:130] ! I0203 12:27:24.803646       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0203 12:28:41.050044   13136 command_runner.go:130] ! I0203 12:27:24.803948       1 controller.go:78] Starting OpenAPI AggregationController
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.804549       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.805072       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0203 12:28:41.050090   13136 command_runner.go:130] ! I0203 12:27:24.805137       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0203 12:28:41.050138   13136 command_runner.go:130] ! I0203 12:27:24.805149       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0203 12:28:41.050138   13136 command_runner.go:130] ! I0203 12:27:24.805622       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0203 12:28:41.050184   13136 command_runner.go:130] ! I0203 12:27:24.805888       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0203 12:28:41.050184   13136 command_runner.go:130] ! I0203 12:27:24.806059       1 aggregator.go:169] waiting for initial CRD sync...
	I0203 12:28:41.050234   13136 command_runner.go:130] ! I0203 12:27:24.806071       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0203 12:28:41.050234   13136 command_runner.go:130] ! I0203 12:27:24.806336       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.815482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.815778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.050280   13136 command_runner.go:130] ! I0203 12:27:24.857328       1 controller.go:142] Starting OpenAPI controller
	I0203 12:28:41.050328   13136 command_runner.go:130] ! I0203 12:27:24.857674       1 controller.go:90] Starting OpenAPI V3 controller
	I0203 12:28:41.050328   13136 command_runner.go:130] ! I0203 12:27:24.857889       1 naming_controller.go:294] Starting NamingConditionController
	I0203 12:28:41.050374   13136 command_runner.go:130] ! I0203 12:27:24.858090       1 establishing_controller.go:81] Starting EstablishingController
	I0203 12:28:41.050374   13136 command_runner.go:130] ! I0203 12:27:24.858264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.858511       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.858696       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0203 12:28:41.050422   13136 command_runner.go:130] ! I0203 12:27:24.805624       1 controller.go:119] Starting legacy_token_tracking_controller
	I0203 12:28:41.050469   13136 command_runner.go:130] ! I0203 12:27:24.859559       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0203 12:28:41.050469   13136 command_runner.go:130] ! I0203 12:27:24.859779       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.859901       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.805642       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0203 12:28:41.050518   13136 command_runner.go:130] ! I0203 12:27:24.805842       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.960247       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.962958       1 aggregator.go:171] initial CRD sync complete...
	I0203 12:28:41.050572   13136 command_runner.go:130] ! I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:28:41.050621   13136 command_runner.go:130] ! I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:28:41.050667   13136 command_runner.go:130] ! I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:28:41.050716   13136 command_runner.go:130] ! I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:28:41.050762   13136 command_runner.go:130] ! I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:28:41.050762   13136 command_runner.go:130] ! I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:28:41.050815   13136 command_runner.go:130] ! I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:28:41.050815   13136 command_runner.go:130] ! I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:28:41.050861   13136 command_runner.go:130] ! I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:28:41.050910   13136 command_runner.go:130] ! I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:28:41.050962   13136 command_runner.go:130] ! I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 12:28:41.050962   13136 command_runner.go:130] ! W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:28:41.051011   13136 command_runner.go:130] ! I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:28:41.051056   13136 command_runner.go:130] ! I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:28:41.051056   13136 command_runner.go:130] ! I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:28:41.051106   13136 command_runner.go:130] ! I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:28:41.051106   13136 command_runner.go:130] ! I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 12:28:41.060764   13136 logs.go:123] Gathering logs for kube-scheduler [2e43c2ecb4a9] ...
	I0203 12:28:41.060764   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e43c2ecb4a9"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:41.091755   13136 command_runner.go:130] ! W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.091755   13136 command_runner.go:130] ! I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.094767   13136 logs.go:123] Gathering logs for kube-scheduler [88c40ca9aa3c] ...
	I0203 12:28:41.094839   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c40ca9aa3c"
	I0203 12:28:41.125303   13136 command_runner.go:130] ! I0203 12:04:50.173813       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.125786   13136 command_runner.go:130] ! W0203 12:04:52.061949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062136       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.062322       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.183111       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.183265       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.186981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.187238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.187329       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.125950   13136 command_runner.go:130] ! I0203 12:04:52.190286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.193791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:41.125950   13136 command_runner.go:130] ! E0203 12:04:52.193853       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.125950   13136 command_runner.go:130] ! W0203 12:04:52.194153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:41.125950   13136 command_runner.go:130] ! E0203 12:04:52.194308       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126481   13136 command_runner.go:130] ! W0203 12:04:52.194637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.126481   13136 command_runner.go:130] ! E0203 12:04:52.195017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126579   13136 command_runner.go:130] ! W0203 12:04:52.194800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:41.126625   13136 command_runner.go:130] ! E0203 12:04:52.195139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126665   13136 command_runner.go:130] ! W0203 12:04:52.194975       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.126665   13136 command_runner.go:130] ! E0203 12:04:52.195284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126736   13136 command_runner.go:130] ! W0203 12:04:52.196729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:41.126736   13136 command_runner.go:130] ! E0203 12:04:52.197161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126736   13136 command_runner.go:130] ! W0203 12:04:52.196961       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:41.126857   13136 command_runner.go:130] ! E0203 12:04:52.197453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126857   13136 command_runner.go:130] ! W0203 12:04:52.197005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.126919   13136 command_runner.go:130] ! E0203 12:04:52.197828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126919   13136 command_runner.go:130] ! W0203 12:04:52.197050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:41.126981   13136 command_runner.go:130] ! E0203 12:04:52.198044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.126981   13136 command_runner.go:130] ! W0203 12:04:52.197096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:41.127050   13136 command_runner.go:130] ! E0203 12:04:52.198641       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127050   13136 command_runner.go:130] ! W0203 12:04:52.200812       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.127135   13136 command_runner.go:130] ! E0203 12:04:52.201002       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:41.127135   13136 command_runner.go:130] ! W0203 12:04:52.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:41.127205   13136 command_runner.go:130] ! E0203 12:04:52.201287       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127269   13136 command_runner.go:130] ! W0203 12:04:52.201462       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127269   13136 command_runner.go:130] ! E0203 12:04:52.201749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127336   13136 command_runner.go:130] ! W0203 12:04:52.203997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:41.127336   13136 command_runner.go:130] ! E0203 12:04:52.204039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127402   13136 command_runner.go:130] ! W0203 12:04:52.204263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127402   13136 command_runner.go:130] ! E0203 12:04:52.204370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127471   13136 command_runner.go:130] ! W0203 12:04:52.204862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127471   13136 command_runner.go:130] ! E0203 12:04:52.205088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127547   13136 command_runner.go:130] ! W0203 12:04:53.007728       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127616   13136 command_runner.go:130] ! E0203 12:04:53.008599       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127616   13136 command_runner.go:130] ! W0203 12:04:53.048183       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0203 12:28:41.127681   13136 command_runner.go:130] ! E0203 12:04:53.048434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127681   13136 command_runner.go:130] ! W0203 12:04:53.164447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0203 12:28:41.127751   13136 command_runner.go:130] ! E0203 12:04:53.165061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127751   13136 command_runner.go:130] ! W0203 12:04:53.169067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0203 12:28:41.127824   13136 command_runner.go:130] ! E0203 12:04:53.169917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127824   13136 command_runner.go:130] ! W0203 12:04:53.247439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.127895   13136 command_runner.go:130] ! E0203 12:04:53.247628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.127895   13136 command_runner.go:130] ! W0203 12:04:53.427203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0203 12:28:41.127977   13136 command_runner.go:130] ! E0203 12:04:53.427543       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128047   13136 command_runner.go:130] ! W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128114   13136 command_runner.go:130] ! E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128173   13136 command_runner.go:130] ! W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0203 12:28:41.128244   13136 command_runner.go:130] ! E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128244   13136 command_runner.go:130] ! W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0203 12:28:41.128290   13136 command_runner.go:130] ! E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128337   13136 command_runner.go:130] ! W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0203 12:28:41.128383   13136 command_runner.go:130] ! E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0203 12:28:41.128383   13136 command_runner.go:130] ! W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0203 12:28:41.128423   13136 command_runner.go:130] ! E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128468   13136 command_runner.go:130] ! W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.128508   13136 command_runner.go:130] ! E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128508   13136 command_runner.go:130] ! W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0203 12:28:41.128595   13136 command_runner.go:130] ! E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128595   13136 command_runner.go:130] ! W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128680   13136 command_runner.go:130] ! E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128680   13136 command_runner.go:130] ! W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0203 12:28:41.128724   13136 command_runner.go:130] ! E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128765   13136 command_runner.go:130] ! W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0203 12:28:41.128811   13136 command_runner.go:130] ! E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:28:41.128851   13136 command_runner.go:130] ! I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:28:41.128851   13136 command_runner.go:130] ! I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:28:41.128897   13136 command_runner.go:130] ! I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:28:41.128897   13136 command_runner.go:130] ! I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0203 12:28:41.128897   13136 command_runner.go:130] ! E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	I0203 12:28:41.141989   13136 logs.go:123] Gathering logs for kube-proxy [cf33452e7244] ...
	I0203 12:28:41.142983   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf33452e7244"
	I0203 12:28:41.170977   13136 command_runner.go:130] ! I0203 12:27:27.874759       1 server_linux.go:66] "Using iptables proxy"
	I0203 12:28:41.170977   13136 command_runner.go:130] ! E0203 12:27:28.000541       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:41.170977   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	add table ip kube-proxy
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.171399   13136 command_runner.go:130] !  >
	I0203 12:28:41.171399   13136 command_runner.go:130] ! E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0203 12:28:41.171399   13136 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0203 12:28:41.171510   13136 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0203 12:28:41.171533   13136 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.171533   13136 command_runner.go:130] !  >
	I0203 12:28:41.171621   13136 command_runner.go:130] ! I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	I0203 12:28:41.171621   13136 command_runner.go:130] ! E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:28:41.171669   13136 command_runner.go:130] ! I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:28:41.171669   13136 command_runner.go:130] ! I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:28:41.171777   13136 command_runner.go:130] ! I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:28:41.171843   13136 command_runner.go:130] ! I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:28:41.171927   13136 command_runner.go:130] ! I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:28:41.172003   13136 command_runner.go:130] ! I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:28:41.172003   13136 command_runner.go:130] ! I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:28:41.172043   13136 command_runner.go:130] ! I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:28:41.176171   13136 logs.go:123] Gathering logs for coredns [edb5f00f1042] ...
	I0203 12:28:41.176171   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edb5f00f1042"
	I0203 12:28:41.205878   13136 command_runner.go:130] > .:53
	I0203 12:28:41.205931   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:41.205931   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:41.205980   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:41.205980   13136 command_runner.go:130] > [INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	I0203 12:28:41.206466   13136 logs.go:123] Gathering logs for kube-controller-manager [fa5ab1df8985] ...
	I0203 12:28:41.206514   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5ab1df8985"
	I0203 12:28:41.236084   13136 command_runner.go:130] ! I0203 12:27:22.909691       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.236084   13136 command_runner.go:130] ! I0203 12:27:23.402652       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.402986       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.406564       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.406976       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.407714       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:41.236531   13136 command_runner.go:130] ! I0203 12:27:23.407940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.898379       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.903089       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.920491       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.921386       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:41.236625   13136 command_runner.go:130] ! I0203 12:27:26.921411       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.927675       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.928004       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.928034       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:41.236745   13136 command_runner.go:130] ! I0203 12:27:26.930586       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:41.236833   13136 command_runner.go:130] ! I0203 12:27:26.930784       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.930813       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933480       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933510       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.933688       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937614       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937802       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.937815       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.941806       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.942027       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:26.942037       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:41.236861   13136 command_runner.go:130] ! W0203 12:27:26.985553       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.000401       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.000471       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.002441       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.002463       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005161       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005494       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.005531       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.006525       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:41.236861   13136 command_runner.go:130] ! I0203 12:27:27.006554       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.006561       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.018211       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:41.237442   13136 command_runner.go:130] ! I0203 12:27:27.020298       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.237493   13136 command_runner.go:130] ! I0203 12:27:27.020315       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020476       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020496       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:41.237567   13136 command_runner.go:130] ! I0203 12:27:27.020523       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.020531       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.035455       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.237632   13136 command_runner.go:130] ! I0203 12:27:27.035474       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036405       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036423       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.036035       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:41.237702   13136 command_runner.go:130] ! I0203 12:27:27.044089       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.044099       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.055692       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.056054       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.056069       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:41.237797   13136 command_runner.go:130] ! I0203 12:27:27.078626       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:41.237867   13136 command_runner.go:130] ! I0203 12:27:27.078816       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:41.237904   13136 command_runner.go:130] ! I0203 12:27:27.078939       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:41.237947   13136 command_runner.go:130] ! I0203 12:27:27.078953       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092379       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092403       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.092472       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.093806       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094076       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094201       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.094716       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095015       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095085       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095525       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095975       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.095995       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.096141       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.105052       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108021       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108044       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.108849       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.111028       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.111046       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.178113       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.178273       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.181884       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182308       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182384       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.182422       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.220586       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.220908       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.221122       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:41.237974   13136 command_runner.go:130] ! I0203 12:27:27.254107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259526       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259566       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:41.238523   13136 command_runner.go:130] ! I0203 12:27:27.259642       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:41.238611   13136 command_runner.go:130] ! W0203 12:27:27.259665       1 shared_informer.go:597] resyncPeriod 16h18m36.581327018s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:41.238655   13136 command_runner.go:130] ! I0203 12:27:27.259798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:41.238697   13136 command_runner.go:130] ! I0203 12:27:27.259831       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:41.238697   13136 command_runner.go:130] ! I0203 12:27:27.259851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:41.238743   13136 command_runner.go:130] ! I0203 12:27:27.259880       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:41.238784   13136 command_runner.go:130] ! I0203 12:27:27.259900       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:41.238819   13136 command_runner.go:130] ! I0203 12:27:27.259918       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:41.238819   13136 command_runner.go:130] ! I0203 12:27:27.259931       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:41.238859   13136 command_runner.go:130] ! I0203 12:27:27.259951       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:41.238894   13136 command_runner.go:130] ! I0203 12:27:27.259973       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:41.238933   13136 command_runner.go:130] ! I0203 12:27:27.259996       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:41.238975   13136 command_runner.go:130] ! I0203 12:27:27.260019       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:41.239015   13136 command_runner.go:130] ! I0203 12:27:27.260033       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:41.239057   13136 command_runner.go:130] ! W0203 12:27:27.260043       1 shared_informer.go:597] resyncPeriod 12h21m15.604254037s is smaller than resyncCheckPeriod 16h18m48.925429448s and the informer has already started. Changing it to 16h18m48.925429448s
	I0203 12:28:41.239057   13136 command_runner.go:130] ! I0203 12:27:27.260097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:41.239097   13136 command_runner.go:130] ! I0203 12:27:27.260171       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:41.239137   13136 command_runner.go:130] ! I0203 12:27:27.260229       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:41.239176   13136 command_runner.go:130] ! I0203 12:27:27.260265       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:41.239211   13136 command_runner.go:130] ! I0203 12:27:27.260486       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:41.239250   13136 command_runner.go:130] ! I0203 12:27:27.260501       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.239285   13136 command_runner.go:130] ! I0203 12:27:27.260524       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:41.239325   13136 command_runner.go:130] ! I0203 12:27:27.267963       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:41.239366   13136 command_runner.go:130] ! I0203 12:27:27.267980       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:41.239405   13136 command_runner.go:130] ! I0203 12:27:27.268261       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:41.239440   13136 command_runner.go:130] ! I0203 12:27:27.268271       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:41.239479   13136 command_runner.go:130] ! I0203 12:27:27.275304       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:41.239520   13136 command_runner.go:130] ! I0203 12:27:27.275791       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:41.239560   13136 command_runner.go:130] ! I0203 12:27:27.275805       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:41.239595   13136 command_runner.go:130] ! I0203 12:27:27.282846       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:41.239595   13136 command_runner.go:130] ! I0203 12:27:27.285688       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:41.239635   13136 command_runner.go:130] ! I0203 12:27:27.285931       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:41.239675   13136 command_runner.go:130] ! I0203 12:27:27.285943       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:41.239675   13136 command_runner.go:130] ! I0203 12:27:27.285971       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:41.239715   13136 command_runner.go:130] ! I0203 12:27:27.285981       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:41.239715   13136 command_runner.go:130] ! I0203 12:27:27.294816       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:41.239749   13136 command_runner.go:130] ! I0203 12:27:27.294925       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:41.239789   13136 command_runner.go:130] ! I0203 12:27:27.294936       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:41.239823   13136 command_runner.go:130] ! I0203 12:27:27.318951       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:41.239863   13136 command_runner.go:130] ! I0203 12:27:27.319030       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:41.239904   13136 command_runner.go:130] ! I0203 12:27:27.319040       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:41.239904   13136 command_runner.go:130] ! I0203 12:27:27.355026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:41.239944   13136 command_runner.go:130] ! I0203 12:27:27.355145       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:41.239944   13136 command_runner.go:130] ! I0203 12:27:27.355157       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:41.239985   13136 command_runner.go:130] ! I0203 12:27:27.502334       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:41.240025   13136 command_runner.go:130] ! I0203 12:27:27.502612       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.240025   13136 command_runner.go:130] ! I0203 12:27:27.503231       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:41.240065   13136 command_runner.go:130] ! I0203 12:27:27.503509       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:41.240065   13136 command_runner.go:130] ! I0203 12:27:27.601804       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.601861       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702241       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702332       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702378       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.702389       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752020       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752619       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.752706       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803085       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803455       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.803481       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855074       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855248       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855184       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.855399       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906335       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906694       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.906991       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.907151       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.952285       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.952811       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:27.953099       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.007756       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008110       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008081       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.008316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.056312       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.059984       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.060009       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:41.240113   13136 command_runner.go:130] ! I0203 12:27:28.076985       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.123054       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.125466       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.127487       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.240657   13136 command_runner.go:130] ! I0203 12:27:28.128305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.130715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.131611       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:41.240768   13136 command_runner.go:130] ! I0203 12:27:28.137580       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:41.240809   13136 command_runner.go:130] ! I0203 12:27:28.142883       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:41.240809   13136 command_runner.go:130] ! I0203 12:27:28.155436       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.169742       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.178458       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.179559       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.184280       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:41.240866   13136 command_runner.go:130] ! I0203 12:27:28.184866       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.185203       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.188183       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.191185       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:41.240936   13136 command_runner.go:130] ! I0203 12:27:28.192463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.192932       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.195813       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.240997   13136 command_runner.go:130] ! I0203 12:27:28.197022       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.197371       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.203607       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.205940       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:41.241055   13136 command_runner.go:130] ! I0203 12:27:28.206428       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206719       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206743       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:41.241128   13136 command_runner.go:130] ! I0203 12:27:28.206759       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241183   13136 command_runner.go:130] ! I0203 12:27:28.207497       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:41.241262   13136 command_runner.go:130] ! I0203 12:27:28.212287       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:41.241262   13136 command_runner.go:130] ! I0203 12:27:28.212651       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.216545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.213230       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.220697       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:41.241301   13136 command_runner.go:130] ! I0203 12:27:28.221685       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.223956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.214977       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.215855       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:41.241354   13136 command_runner.go:130] ! I0203 12:27:28.229339       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:41.241410   13136 command_runner.go:130] ! I0203 12:27:28.231152       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.241410   13136 command_runner.go:130] ! I0203 12:27:28.240053       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.244571       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.253632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:41.241470   13136 command_runner.go:130] ! I0203 12:27:28.253905       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.254335       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.256579       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.261559       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.241525   13136 command_runner.go:130] ! I0203 12:27:28.272196       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.241585   13136 command_runner.go:130] ! I0203 12:27:28.278627       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:41.241585   13136 command_runner.go:130] ! I0203 12:27:28.278875       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279427       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.241654   13136 command_runner.go:130] ! I0203 12:27:28.279877       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.279830       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.304983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.241719   13136 command_runner.go:130] ! I0203 12:27:28.305231       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:41.241777   13136 command_runner.go:130] ! I0203 12:27:28.305564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:41.241777   13136 command_runner.go:130] ! I0203 12:27:28.321623       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:41.241877   13136 command_runner.go:130] ! I0203 12:27:28.355620       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.241987   13136 command_runner.go:130] ! I0203 12:27:28.537851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.769991ms"
	I0203 12:28:41.241987   13136 command_runner.go:130] ! I0203 12:27:28.538124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.5µs"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:28.549449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="358.01756ms"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:28.551039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.301µs"
	I0203 12:28:41.242048   13136 command_runner.go:130] ! I0203 12:27:38.365008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242108   13136 command_runner.go:130] ! I0203 12:28:10.033136       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242108   13136 command_runner.go:130] ! I0203 12:28:10.034663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:10.065494       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:13.309331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.242169   13136 command_runner.go:130] ! I0203 12:28:18.332821       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.352713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.408588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.468372ms"
	I0203 12:28:41.242225   13136 command_runner.go:130] ! I0203 12:28:18.409083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.101µs"
	I0203 12:28:41.242289   13136 command_runner.go:130] ! I0203 12:28:23.502598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.242289   13136 command_runner.go:130] ! I0203 12:28:31.524388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.544593ms"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.524629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="171.802µs"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.550980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.601µs"
	I0203 12:28:41.242346   13136 command_runner.go:130] ! I0203 12:28:31.616132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.896902ms"
	I0203 12:28:41.242407   13136 command_runner.go:130] ! I0203 12:28:31.618203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.002µs"
	I0203 12:28:41.260313   13136 logs.go:123] Gathering logs for kindnet [644890f5738e] ...
	I0203 12:28:41.260313   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 644890f5738e"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.922584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925544       1 main.go:139] hostIP = 172.25.12.244
	I0203 12:28:41.290530   13136 command_runner.go:130] ! podIP = 172.25.12.244
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925723       1 main.go:148] setting mtu 1500 for CNI 
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925791       1 main.go:178] kindnetd IP family: "ipv4"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:27.925960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:27:28.656536       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0203 12:28:41.290530   13136 command_runner.go:130] ! add table inet kindnet-network-policies
	I0203 12:28:41.290530   13136 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0203 12:28:41.290530   13136 command_runner.go:130] ! , skipping network policies
	I0203 12:28:41.290530   13136 command_runner.go:130] ! W0203 12:27:58.664159       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0203 12:28:41.290530   13136 command_runner.go:130] ! E0203 12:27:58.664461       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.665271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.665332       1 main.go:301] handling current node
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.666606       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.290530   13136 command_runner.go:130] ! I0203 12:28:08.666704       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291546   13136 command_runner.go:130] ! I0203 12:28:08.667036       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.25.8.35 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.667510       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.667530       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291692   13136 command_runner.go:130] ! I0203 12:28:08.668238       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.25.0.54 Flags: [] Table: 0 Realm: 0} 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.657872       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658001       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658271       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658397       1 main.go:301] handling current node
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658413       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:18.658420       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.657620       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658189       1 main.go:301] handling current node
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658424       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.291770   13136 command_runner.go:130] ! I0203 12:28:28.658517       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:28.658702       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:28.659037       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660508       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660637       1 main.go:301] handling current node
	I0203 12:28:41.291916   13136 command_runner.go:130] ! I0203 12:28:38.660667       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.660675       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.661328       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:28:41.292010   13136 command_runner.go:130] ! I0203 12:28:38.661463       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:28:41.294432   13136 logs.go:123] Gathering logs for Docker ...
	I0203 12:28:41.294506   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.326762   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.326853   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.326853   13136 command_runner.go:130] > Feb 03 12:25:59 minikube cri-dockerd[225]: time="2025-02-03T12:25:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.326914   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0203 12:28:41.326956   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube cri-dockerd[416]: time="2025-02-03T12:26:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.327044   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.327157   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327176   13136 command_runner.go:130] > Feb 03 12:26:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327176   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube cri-dockerd[424]: time="2025-02-03T12:26:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.380727146Z" level=info msg="Starting up"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.381865516Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[651]: time="2025-02-03T12:26:45.382773073Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.412550323Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440135738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440206542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440329250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.440352551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441207804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441394816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441695635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441819442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441843144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327227   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.441855545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.442535887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.443428142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.327770   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446651543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.327946   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446752549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.446913259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447005465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:41.328015   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447482194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.447592401Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452471104Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:41.328082   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452580211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452605613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452624714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:41.328148   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452641915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.452717520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453010238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.328216   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453128145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.328282   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453147046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:41.328282   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453162147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453177448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453199850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328348   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453215851Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453237552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453360460Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453415663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328415   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453522870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328481   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453541271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.328497   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453563972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328546   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453580773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328546   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453596174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453611675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453625276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328581   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453640377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453653878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453667779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453687080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328657   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453703481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453716682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453729883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328730   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453743884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453761485Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453785086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328797   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453804587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453818788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453867591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:41.328864   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.453971798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:41.328951   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454021201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:41.328978   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454132008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:41.329006   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454147409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454163610Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454175210Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:41.329080   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454622938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454857953Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.454980660Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:41.329151   13136 command_runner.go:130] > Feb 03 12:26:45 multinode-749300 dockerd[657]: time="2025-02-03T12:26:45.455105168Z" level=info msg="containerd successfully booted in 0.044680s"
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.439313185Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.630975852Z" level=info msg="Loading containers: start."
	I0203 12:28:41.329222   13136 command_runner.go:130] > Feb 03 12:26:46 multinode-749300 dockerd[651]: time="2025-02-03T12:26:46.949194693Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.095120348Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.212617937Z" level=info msg="Loading containers: done."
	I0203 12:28:41.329288   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238410035Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0203 12:28:41.329359   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238496541Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0203 12:28:41.329359   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.238529943Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.239396503Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.279910027Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:41.329424   13136 command_runner.go:130] > Feb 03 12:26:47 multinode-749300 dockerd[651]: time="2025-02-03T12:26:47.280075738Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.298017161Z" level=info msg="Processing signal 'terminated'"
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 systemd[1]: Stopping Docker Application Container Engine...
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.300466075Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0203 12:28:41.329493   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301181479Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301265080Z" level=info msg="Daemon shutdown complete"
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:11 multinode-749300 dockerd[651]: time="2025-02-03T12:27:11.301434281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: docker.service: Deactivated successfully.
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Stopped Docker Application Container Engine.
	I0203 12:28:41.329568   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 systemd[1]: Starting Docker Application Container Engine...
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.352956833Z" level=info msg="Starting up"
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.353893039Z" level=info msg="containerd not running, starting managed containerd"
	I0203 12:28:41.329641   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:12.356231552Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1107
	I0203 12:28:41.329705   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.387763834Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0203 12:28:41.329705   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415379693Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415427893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415503993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0203 12:28:41.329774   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415521293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329843   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415552594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329843   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415571594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415753695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415875095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329909   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415895996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415907496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.415998596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.329974   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.416122597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419383016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419448316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0203 12:28:41.330066   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419602317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419703417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419732118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0203 12:28:41.330140   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.419761418Z" level=info msg="metadata content store policy set" policy=shared
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420025019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420117020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420135220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0203 12:28:41.330207   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420150320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420168320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420220020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0203 12:28:41.330273   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420554522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.330345   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420715123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0203 12:28:41.330345   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420811824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420833624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420853524Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330414   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420879824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420897724Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420912624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.420991825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330481   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421007125Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330548   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421021725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330548   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421034325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421059025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421075725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330616   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421090525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421104726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421118126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330687   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421132126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421150126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421166226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330754   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421188326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421206126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421218626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421231326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330823   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421244126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421262126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421286927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330898   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421299927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421316127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421657629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0203 12:28:41.330969   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421699929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0203 12:28:41.331046   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421719729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0203 12:28:41.331131   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421738629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0203 12:28:41.331152   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421749929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421767729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.421781429Z" level=info msg="NRI interface is disabled by configuration."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422100631Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422251132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422392333Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:12 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:12.422418033Z" level=info msg="containerd successfully booted in 0.035603s"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.403475080Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.431623642Z" level=info msg="Loading containers: start."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.675130644Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.788922499Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.907280980Z" level=info msg="Loading containers: done."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932910027Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.932994128Z" level=info msg="Daemon has completed initialization"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970542044Z" level=info msg="API listen on /var/run/docker.sock"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:13.970691945Z" level=info msg="API listen on [::]:2376"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:13 multinode-749300 systemd[1]: Started Docker Application Container Engine.
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start docker client with request timeout 0s"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Loaded network plugin cni"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:14Z" level=info msg="Start cri-dockerd grpc backend"
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:14 multinode-749300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-zgvmd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"efcd217a3204d8ee4b03ebb412109a32b1b008fc65b7434e2087e8fa5429c03b\""
	I0203 12:28:41.331180   13136 command_runner.go:130] > Feb 03 12:27:19 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-v2gkp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26e5557dc32ce42e41eb095169017d71cd452b2e90ecede8972ab6dfa8c841ac\""
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.731892062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732069764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732104064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.732632967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331746   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742524924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331859   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742776225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331897   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.742902026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331939   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.743145327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331939   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787449782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787596483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787637083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.787820284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818198959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818289160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818451361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:20.818555561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/264f9c1c2c05f544f10a0af503e7dfb16c8eaf7dab55a12d747c05df02b07807/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:20 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8732fe7d2435b888ee9c1bdc8f366b2cd23fe7a47230b5e0b7e6e97547fb30e/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2da6b5a5bd1b22ed0d0ef9ab7fd9a0874f1357443511e898b07fbae5f28d3d0/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc833a943f11f228aa4ef7daceca6bf4fd4096e22ee6354cc8afb177b0dc3db5/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.377130176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378256483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378462184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.378972087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.423087341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.424963652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.426916563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.427886269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440196639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.440916544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.331971   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442061550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.442305352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.453876818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332496   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454104020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332581   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454340021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:21 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:21.454632323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:25 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474743418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474833119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474852519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.474952220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502675379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502746480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502760180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.502846980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507587807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507657108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507682008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:26.507809209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4912e7d3383ee7e383387115cfa625509cdb8edff08db473311607d723e4d67/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1eece224f54eb90d32ca17e53dec80b8ad8db63a733127cae7ce39832c944127/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:26 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:27:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c682ff8834bf472070d7ef8557ee1391dcfffd86e9b6a29c668eee4fe700e342/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010215801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010492502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010590603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.010742104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013544220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.332611   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.013710621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.014126823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145033877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145181177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333142   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145225278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333222   13136 command_runner.go:130] > Feb 03 12:27:27 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:27.145314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333253   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1100]: time="2025-02-03T12:27:57.589562586Z" level=info msg="ignoring event" container=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0203 12:28:41.333297   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.333327   13136 command_runner.go:130] > Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0203 12:28:41.362391   13136 logs.go:123] Gathering logs for etcd [09707a862965] ...
	I0203 12:28:41.362391   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09707a862965"
	I0203 12:28:41.392746   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.807150Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:41.393649   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.807376Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.25.12.244:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.25.12.244:2380","--initial-cluster=multinode-749300=https://172.25.12.244:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.25.12.244:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.25.12.244:2380","--name=multinode-749300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0203 12:28:41.393761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810076Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.810110Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810121Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.25.12.244:2380"]}
	I0203 12:28:41.393780   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.810165Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:41.393860   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.813162Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"]}
	I0203 12:28:41.393948   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.815738Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-749300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0203 12:28:41.394013   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.836502Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.618913ms"}
	I0203 12:28:41.394013   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.860600Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0203 12:28:41.394075   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.876663Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","commit-index":2011}
	I0203 12:28:41.394075   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.879122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=()"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.881202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became follower at term 2"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.882322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aee9b6e79987349e [peers: [], term: 2, commit: 2011, applied: 0, lastindex: 2011, lastterm: 2]"}
	I0203 12:28:41.394139   13136 command_runner.go:130] ! {"level":"warn","ts":"2025-02-03T12:27:21.896121Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0203 12:28:41.394209   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.900153Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1395}
	I0203 12:28:41.394209   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.903670Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1746}
	I0203 12:28:41.394271   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.910428Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0203 12:28:41.394271   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.919884Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aee9b6e79987349e","timeout":"7s"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.920678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aee9b6e79987349e"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.922572Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aee9b6e79987349e","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0203 12:28:41.394335   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.923543Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0203 12:28:41.394404   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0203 12:28:41.394404   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0203 12:28:41.394466   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924338Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0203 12:28:41.394466   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.924675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e switched to configuration voters=(12603806138002519198)"}
	I0203 12:28:41.394535   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	I0203 12:28:41.394535   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	I0203 12:28:41.394600   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0203 12:28:41.394600   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	I0203 12:28:41.394664   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	I0203 12:28:41.394761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0203 12:28:41.394761   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0203 12:28:41.394827   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	I0203 12:28:41.394827   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	I0203 12:28:41.394891   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	I0203 12:28:41.394960   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	I0203 12:28:41.394960   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	I0203 12:28:41.395023   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	I0203 12:28:41.395023   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0203 12:28:41.395086   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0203 12:28:41.395156   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0203 12:28:41.395218   13136 command_runner.go:130] ! {"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	I0203 12:28:41.405384   13136 logs.go:123] Gathering logs for coredns [fe91a8d012ae] ...
	I0203 12:28:41.405384   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe91a8d012ae"
	I0203 12:28:41.434666   13136 command_runner.go:130] > .:53
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	I0203 12:28:41.434666   13136 command_runner.go:130] > CoreDNS-1.11.3
	I0203 12:28:41.434666   13136 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 127.0.0.1:49376 - 54533 "HINFO IN 5545318737342419956.4498205497283969299. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.271697251s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:43143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000594006s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:44943 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.183348242s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:36646 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.156236585s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:58135 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.085964402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:55647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429704s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:43653 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000173402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:39125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:43285 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000234602s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:49861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157602s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:59079 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024886436s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:56014 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155402s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:49501 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:59809 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029540479s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184901s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:58561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207002s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.1.2:54547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108101s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:52767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140901s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	I0203 12:28:41.434666   13136 command_runner.go:130] > [INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	I0203 12:28:41.435194   13136 command_runner.go:130] > [INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	I0203 12:28:41.435300   13136 command_runner.go:130] > [INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	I0203 12:28:41.435387   13136 command_runner.go:130] > [INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0203 12:28:41.435479   13136 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0203 12:28:41.438438   13136 logs.go:123] Gathering logs for kube-controller-manager [8ade10c0fb09] ...
	I0203 12:28:41.438517   13136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ade10c0fb09"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.328199       1 serving.go:386] Generated self-signed cert in-memory
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.683234       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.683563       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687907       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:50.687984       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.071680       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.072106       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.089226       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.089889       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.091177       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.113934       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.114137       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.114294       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.115111       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0203 12:28:41.469945   13136 command_runner.go:130] ! I0203 12:04:55.143403       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.146241       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.146450       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0203 12:28:41.470481   13136 command_runner.go:130] ! I0203 12:04:55.167456       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.168207       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.169697       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.170035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.172429       1 shared_informer.go:320] Caches are synced for tokens
	I0203 12:28:41.470535   13136 command_runner.go:130] ! W0203 12:04:55.207419       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.220184       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0203 12:28:41.470535   13136 command_runner.go:130] ! I0203 12:04:55.220335       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.220802       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.220818       1 shared_informer.go:313] Waiting for caches to sync for node
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.236689       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0203 12:28:41.471067   13136 command_runner.go:130] ! I0203 12:04:55.236985       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0203 12:28:41.471290   13136 command_runner.go:130] ! I0203 12:04:55.237003       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0203 12:28:41.471290   13136 command_runner.go:130] ! I0203 12:04:55.260414       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.260996       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.261428       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.289640       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.289893       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.290571       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.290736       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.314846       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.315076       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.315101       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0203 12:28:41.471358   13136 command_runner.go:130] ! I0203 12:04:55.319462       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319527       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0203 12:28:41.471901   13136 command_runner.go:130] ! I0203 12:04:55.319689       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.319723       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.319733       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.446823       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0203 12:28:41.471965   13136 command_runner.go:130] ! I0203 12:04:55.446851       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.446960       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.446972       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.579930       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0203 12:28:41.472032   13136 command_runner.go:130] ! I0203 12:04:55.580047       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.580079       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730127       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730301       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0203 12:28:41.472096   13136 command_runner.go:130] ! I0203 12:04:55.730314       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889482       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889843       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:55.889907       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:56.030244       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0203 12:28:41.472156   13136 command_runner.go:130] ! I0203 12:04:56.030535       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0203 12:28:41.472225   13136 command_runner.go:130] ! I0203 12:04:56.030566       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.182222       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.183153       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.183191       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226256       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226280       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226330       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226371       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.226410       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.382971       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.383201       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.383218       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687449       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687532       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.687548       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832606       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832640       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.832542       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984351       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984538       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:56.984550       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.130440       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.131375       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.131428       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284265       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284330       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.284343       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.431498       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.431577       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.432308       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.580329       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0203 12:28:41.476319   13136 command_runner.go:130] ! I0203 12:04:57.580661       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0203 12:28:41.476881   13136 command_runner.go:130] ! I0203 12:04:57.580693       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730504       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730629       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730638       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730646       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730719       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.730820       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.880536       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.880666       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:57.881079       1 shared_informer.go:313] Waiting for caches to sync for job
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.186601       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.186797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187086       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! W0203 12:04:58.187187       1 shared_informer.go:597] resyncPeriod 18h8m42.862796871s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187334       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187374       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187455       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! W0203 12:04:58.187474       1 shared_informer.go:597] resyncPeriod 19h41m25.464232572s is smaller than resyncCheckPeriod 21h1m9.302357504s and the informer has already started. Changing it to 21h1m9.302357504s
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187523       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187662       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187679       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187699       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.187967       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0203 12:28:41.476915   13136 command_runner.go:130] ! I0203 12:04:58.188030       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0203 12:28:41.477472   13136 command_runner.go:130] ! I0203 12:04:58.188069       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0203 12:28:41.477509   13136 command_runner.go:130] ! I0203 12:04:58.188097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188127       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188181       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188248       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188271       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.188294       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434011       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434132       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.434145       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.476316       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478848       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478330       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478362       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478346       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479085       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478432       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479097       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478442       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478482       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.479316       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478490       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.478533       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630437       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630476       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630884       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.630985       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.825850       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:58.826005       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025218       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025576       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.025879       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.026140       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.076054       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.477572   13136 command_runner.go:130] ! I0203 12:04:59.076201       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.229685       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.229852       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384463       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384562       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384584       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384709       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.384734       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.531643       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.535171       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.535208       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.555530       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.582679       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300\" does not exist"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.593117       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.615597       1 shared_informer.go:320] Caches are synced for expand
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.619951       1 shared_informer.go:320] Caches are synced for taint
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.620233       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.621144       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.621999       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.620965       1 shared_informer.go:320] Caches are synced for node
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622115       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622196       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622213       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.622220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627214       1 shared_informer.go:320] Caches are synced for disruption
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.627517       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.630821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.631018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.631607       1 shared_informer.go:320] Caches are synced for PV protection
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632152       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632358       1 shared_informer.go:320] Caches are synced for service account
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632692       1 shared_informer.go:320] Caches are synced for cronjob
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.632840       1 shared_informer.go:320] Caches are synced for TTL
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.634133       1 shared_informer.go:320] Caches are synced for GC
	I0203 12:28:41.478141   13136 command_runner.go:130] ! I0203 12:04:59.634183       1 shared_informer.go:320] Caches are synced for namespace
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.637337       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.637530       1 shared_informer.go:320] Caches are synced for crt configmap
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644447       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300" podCIDRs=["10.244.0.0/24"]
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644496       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.644518       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.647453       1 shared_informer.go:320] Caches are synced for deployment
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.647468       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.661087       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.662500       1 shared_informer.go:320] Caches are synced for ephemeral
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679489       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.679271       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.680515       1 shared_informer.go:320] Caches are synced for daemon sets
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.680894       1 shared_informer.go:320] Caches are synced for stateful set
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.682157       1 shared_informer.go:320] Caches are synced for job
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.686733       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688328       1 shared_informer.go:320] Caches are synced for HPA
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688383       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.698063       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688399       1 shared_informer.go:320] Caches are synced for PVC protection
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688419       1 shared_informer.go:320] Caches are synced for attach detach
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.688482       1 shared_informer.go:320] Caches are synced for resource quota
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697636       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.697649       1 shared_informer.go:320] Caches are synced for endpoint
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714625       1 shared_informer.go:320] Caches are synced for garbage collector
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714677       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:04:59.714688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.046777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.818009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="311.273381ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.848718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.361418ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:00.848801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.501µs"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.040466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.174094ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.060761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.181113ms"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:01.062232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.701µs"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:21.819966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.478690   13136 command_runner.go:130] ! I0203 12:05:21.843034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479281   13136 command_runner.go:130] ! I0203 12:05:21.853094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.503µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:21.889706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:23.170298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.1µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.187762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.236374ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.188513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:24.626780       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:05:26.205271       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.197252       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.213772       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.214096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.214387       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.243166       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:57.578919       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:58.163164       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:59.655130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:07:59.772999       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:07.534314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.797682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.797755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:26.813836       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:28.192212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:29.680135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:30.702586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.029918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.629315ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.048475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.732326ms"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.049169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="396.601µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.058159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.9µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:51.069790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.1µs"
	I0203 12:28:41.479341   13136 command_runner.go:130] ! I0203 12:08:53.787260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.580521ms"
	I0203 12:28:41.479889   13136 command_runner.go:130] ! I0203 12:08:53.787659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.201µs"
	I0203 12:28:41.479924   13136 command_runner.go:130] ! I0203 12:08:53.939078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.55302ms"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:08:53.939506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.801µs"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:08:58.516195       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:09:01.710207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.158978       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.160493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187436       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.2.0/24"]
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187486       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.187520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.195215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:30.643712       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:31.194036       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:34.733168       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:34.818129       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:40.541982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.598308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.598384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.613509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:12:59.761059       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:13:01.072377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:13:02.975699       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:16:00.817386       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:17:16.830447       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:18:09.728117       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.872410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.874163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:44.902212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:20:50.011997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.479989   13136 command_runner.go:130] ! I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480601   13136 command_runner.go:130] ! I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480634   13136 command_runner.go:130] ! I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.480696   13136 command_runner.go:130] ! I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:28:41.502967   13136 logs.go:123] Gathering logs for container status ...
	I0203 12:28:41.502967   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 12:28:41.569088   13136 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0203 12:28:41.569088   13136 command_runner.go:130] > edb5f00f10420       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	I0203 12:28:41.569088   13136 command_runner.go:130] > 0ff3e07f2982f       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	I0203 12:28:41.569088   13136 command_runner.go:130] > 7cbc7a552a4c3       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	I0203 12:28:41.569088   13136 command_runner.go:130] > 644890f5738e5       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	I0203 12:28:41.569088   13136 command_runner.go:130] > edf3d4284acbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	I0203 12:28:41.569088   13136 command_runner.go:130] > cf33452e72443       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	I0203 12:28:41.569088   13136 command_runner.go:130] > 09707a8629658       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 2e43c2ecb4a92       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > fa5ab1df89857       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 6c19e0a0ba9c0       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	I0203 12:28:41.569088   13136 command_runner.go:130] > fe91a8d012aee       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	I0203 12:28:41.569088   13136 command_runner.go:130] > fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	I0203 12:28:41.569088   13136 command_runner.go:130] > c6dc514e98f69       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	I0203 12:28:41.569088   13136 command_runner.go:130] > 8ade10c0fb096       019ee182b58e2                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	I0203 12:28:41.569088   13136 command_runner.go:130] > 88c40ca9aa3cb       2b0d6572d062c                                                                                         23 minutes ago       Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
	I0203 12:28:44.072206   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:44.072206   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.072206   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.072206   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.078329   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:44.078329   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.078329   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Audit-Id: a5ed77d1-f712-4996-9675-6c8567838a53
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.078329   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.078329   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.079663   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90284 chars]
	I0203 12:28:44.082964   13136 system_pods.go:59] 12 kube-system pods found
	I0203 12:28:44.082964   13136 system_pods.go:61] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:28:44.082964   13136 system_pods.go:61] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:28:44.083492   13136 system_pods.go:61] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:28:44.083492   13136 system_pods.go:74] duration metric: took 3.7223887s to wait for pod list to return data ...
	I0203 12:28:44.083598   13136 default_sa.go:34] waiting for default service account to be created ...
	I0203 12:28:44.083667   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/default/serviceaccounts
	I0203 12:28:44.083667   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.083667   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.083667   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.089235   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:28:44.089235   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Content-Length: 262
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Audit-Id: e99cee76-01fc-4f73-ba57-8c596bdb4e65
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.089235   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.089235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.089235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.089235   13136 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6fd4ae1e-3802-4893-86a4-85da162d717d","resourceVersion":"329","creationTimestamp":"2025-02-03T12:04:59Z"}}]}
	I0203 12:28:44.089783   13136 default_sa.go:45] found service account: "default"
	I0203 12:28:44.089783   13136 default_sa.go:55] duration metric: took 6.1846ms for default service account to be created ...
	I0203 12:28:44.089783   13136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 12:28:44.089919   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:28:44.089967   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.089967   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.089967   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.093810   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:44.093810   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.093810   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Audit-Id: cae25a14-0418-4eb6-b37d-108d48bbba9e
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.093810   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.093810   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.094921   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90284 chars]
	I0203 12:28:44.098764   13136 system_pods.go:86] 12 kube-system pods found
	I0203 12:28:44.098764   13136 system_pods.go:89] "coredns-668d6bf9bc-v2gkp" [c94a77a3-456e-41d7-b9ad-7aa97e0264a7] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "etcd-multinode-749300" [a956084b-f454-4ef5-8fed-7c189cb74ab0] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-bckxx" [006a41d1-55d5-479a-856f-5670f4ae6588] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-dc9wq" [debecd3d-64fd-46e8-8d28-ca97e75cfcfe] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kindnet-h6m57" [67c155d5-fb9b-42f5-8e64-865c44a5d4e6] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-apiserver-multinode-749300" [72513861-07f4-4533-8f55-8b3cce215b4c] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-controller-manager-multinode-749300" [63c0818c-a0e6-40d1-b0c4-1cd633c91afb] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-9g92t" [1709b874-4fee-41f5-8d30-24912b2fa725] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-ggnq7" [63bc9e77-90e3-40c5-9b49-e95d2bfd7426] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-proxy-w8wrd" [f81878fa-528f-4bdf-90ec-83f54166370e] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "kube-scheduler-multinode-749300" [8e4c1052-9dca-466d-833b-eff318b977d7] Running
	I0203 12:28:44.098764   13136 system_pods.go:89] "storage-provisioner" [4c991afa-7bb0-4d52-bded-22d68037b5ae] Running
	I0203 12:28:44.098764   13136 system_pods.go:126] duration metric: took 8.9813ms to wait for k8s-apps to be running ...
	I0203 12:28:44.099360   13136 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:28:44.106204   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:28:44.134085   13136 system_svc.go:56] duration metric: took 33.9378ms WaitForService to wait for kubelet
	I0203 12:28:44.134085   13136 kubeadm.go:582] duration metric: took 1m13.9269875s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:28:44.134085   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:28:44.134200   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:28:44.134305   13136 round_trippers.go:469] Request Headers:
	I0203 12:28:44.134305   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:28:44.134305   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:28:44.137558   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:28:44.137771   13136 round_trippers.go:577] Response Headers:
	I0203 12:28:44.137771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:28:44 GMT
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Audit-Id: f9be6b6b-a640-40be-9a48-ed837033e5aa
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:28:44.137771   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:28:44.137771   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:28:44.138232   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1975"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16254 chars]
	I0203 12:28:44.139139   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:28:44.139238   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:28:44.139238   13136 node_conditions.go:105] duration metric: took 5.1531ms to run NodePressure ...
	I0203 12:28:44.139238   13136 start.go:241] waiting for startup goroutines ...
	I0203 12:28:44.139238   13136 start.go:246] waiting for cluster config update ...
	I0203 12:28:44.139341   13136 start.go:255] writing updated cluster config ...
	I0203 12:28:44.143571   13136 out.go:201] 
	I0203 12:28:44.145942   13136 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:28:44.160345   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:28:44.161389   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:28:44.166560   13136 out.go:177] * Starting "multinode-749300-m02" worker node in "multinode-749300" cluster
	I0203 12:28:44.168685   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:28:44.169315   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:28:44.169629   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:28:44.169829   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:28:44.169994   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:28:44.171870   13136 start.go:360] acquireMachinesLock for multinode-749300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:28:44.172014   13136 start.go:364] duration metric: took 144µs to acquireMachinesLock for "multinode-749300-m02"
	I0203 12:28:44.172172   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:28:44.172172   13136 fix.go:54] fixHost starting: m02
	I0203 12:28:44.172637   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:46.223651   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:28:46.223651   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:46.223651   13136 fix.go:112] recreateIfNeeded on multinode-749300-m02: state=Stopped err=<nil>
	W0203 12:28:46.223761   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:28:46.227745   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300-m02" ...
	I0203 12:28:46.229652   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300-m02
	I0203 12:28:49.145103   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:49.145103   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:49.145103   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:28:49.145183   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:51.223125   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:28:53.527445   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:53.527445   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:54.527815   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:28:56.555048   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:28:56.555901   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:56.555901   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:28:58.853976   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:28:58.854723   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:28:59.855621   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:01.864711   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:04.172300   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:29:04.172300   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:05.172762   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:07.213673   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:07.214760   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:07.214965   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:09.538187   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:29:09.538187   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:10.539340   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:12.563875   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:12.564439   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:12.564439   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:14.994260   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:14.995107   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:14.997356   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:16.989482   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:19.339369   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:19.339599   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:19.339884   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:29:19.342720   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:29:19.342866   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:21.347660   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:23.680807   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:23.681720   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:23.685734   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:23.685734   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:23.686260   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:29:23.829909   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:29:23.829994   13136 buildroot.go:166] provisioning hostname "multinode-749300-m02"
	I0203 12:29:23.830070   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:25.785508   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:25.785508   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:25.785879   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:28.126150   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:28.126150   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:28.132396   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:28.133188   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:28.133188   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300-m02 && echo "multinode-749300-m02" | sudo tee /etc/hostname
	I0203 12:29:28.297595   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300-m02
	
	I0203 12:29:28.297595   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:30.260773   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:32.645244   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:32.645244   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:32.649090   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:32.649552   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:32.649552   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-749300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-749300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-749300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 12:29:32.803164   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 12:29:32.803164   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0203 12:29:32.803230   13136 buildroot.go:174] setting up certificates
	I0203 12:29:32.803267   13136 provision.go:84] configureAuth start
	I0203 12:29:32.803267   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:34.754644   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:34.754644   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:34.754723   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:37.106839   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:37.106909   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:37.106983   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:39.083419   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:39.083419   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:39.084477   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:41.455774   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:41.455774   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:41.455774   13136 provision.go:143] copyHostCerts
	I0203 12:29:41.456252   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0203 12:29:41.456675   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0203 12:29:41.456675   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0203 12:29:41.457126   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0203 12:29:41.458079   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0203 12:29:41.458239   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0203 12:29:41.458312   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0203 12:29:41.458649   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0203 12:29:41.459471   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0203 12:29:41.459636   13136 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0203 12:29:41.459721   13136 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0203 12:29:41.460016   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0203 12:29:41.461120   13136 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-749300-m02 san=[127.0.0.1 172.25.12.83 localhost minikube multinode-749300-m02]
	I0203 12:29:41.668515   13136 provision.go:177] copyRemoteCerts
	I0203 12:29:41.676417   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 12:29:41.676511   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:43.644792   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:46.016285   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:46.016960   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:46.016960   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:29:46.133381   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4567128s)
	I0203 12:29:46.133443   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0203 12:29:46.133869   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 12:29:46.182115   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0203 12:29:46.182538   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0203 12:29:46.227001   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0203 12:29:46.227001   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 12:29:46.271989   13136 provision.go:87] duration metric: took 13.4685705s to configureAuth
	I0203 12:29:46.271989   13136 buildroot.go:189] setting minikube options for container-runtime
	I0203 12:29:46.273002   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:29:46.273002   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:48.307360   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:48.307437   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:48.307512   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:50.679048   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:50.679048   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:50.682452   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:50.683154   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:50.683154   13136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 12:29:50.822839   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0203 12:29:50.822839   13136 buildroot.go:70] root file system type: tmpfs
	I0203 12:29:50.822839   13136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 12:29:50.822839   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:52.815521   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:52.815521   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:52.816022   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:55.160298   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:55.160697   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:55.165242   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:55.165965   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:55.165965   13136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.12.244"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 12:29:55.339851   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.12.244
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 12:29:55.339983   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:29:57.311598   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:29:57.311849   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:57.312103   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:29:59.665042   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:29:59.665042   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:29:59.669254   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:29:59.669977   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:29:59.669977   13136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 12:30:02.001431   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0203 12:30:02.001431   13136 machine.go:96] duration metric: took 42.6582333s to provisionDockerMachine
	I0203 12:30:02.001431   13136 start.go:293] postStartSetup for "multinode-749300-m02" (driver="hyperv")
	I0203 12:30:02.001431   13136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 12:30:02.010261   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 12:30:02.011020   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:04.023138   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:04.023870   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:04.023870   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:06.435420   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:06.435420   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:06.435420   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:06.553597   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5431968s)
	I0203 12:30:06.560819   13136 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 12:30:06.567642   13136 command_runner.go:130] > NAME=Buildroot
	I0203 12:30:06.567642   13136 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0203 12:30:06.567642   13136 command_runner.go:130] > ID=buildroot
	I0203 12:30:06.567642   13136 command_runner.go:130] > VERSION_ID=2023.02.9
	I0203 12:30:06.567642   13136 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0203 12:30:06.567642   13136 info.go:137] Remote host: Buildroot 2023.02.9
	I0203 12:30:06.567642   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0203 12:30:06.567642   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0203 12:30:06.569339   13136 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> 54522.pem in /etc/ssl/certs
	I0203 12:30:06.569339   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /etc/ssl/certs/54522.pem
	I0203 12:30:06.577353   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 12:30:06.595518   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /etc/ssl/certs/54522.pem (1708 bytes)
	I0203 12:30:06.639303   13136 start.go:296] duration metric: took 4.6378206s for postStartSetup
	I0203 12:30:06.639391   13136 fix.go:56] duration metric: took 1m22.4662947s for fixHost
	I0203 12:30:06.639477   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:08.602935   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:08.603367   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:08.603470   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:10.924979   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:10.924979   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:10.929021   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:30:10.929083   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:30:10.929083   13136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0203 12:30:11.068118   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738585811.083018155
	
	I0203 12:30:11.068200   13136 fix.go:216] guest clock: 1738585811.083018155
	I0203 12:30:11.068200   13136 fix.go:229] Guest: 2025-02-03 12:30:11.083018155 +0000 UTC Remote: 2025-02-03 12:30:06.639391 +0000 UTC m=+283.133881701 (delta=4.443627155s)
	I0203 12:30:11.068274   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:13.010546   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:13.010546   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:13.011033   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:15.371836   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:15.371836   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:15.375529   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:30:15.376274   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.12.83 22 <nil> <nil>}
	I0203 12:30:15.376274   13136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1738585811
	I0203 12:30:15.522276   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb  3 12:30:11 UTC 2025
	
	I0203 12:30:15.522381   13136 fix.go:236] clock set: Mon Feb  3 12:30:11 UTC 2025
	 (err=<nil>)
	I0203 12:30:15.522381   13136 start.go:83] releasing machines lock for "multinode-749300-m02", held for 1m31.349344s
	I0203 12:30:15.522566   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:17.465298   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:17.465922   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:17.465922   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:19.830610   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:19.830610   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:19.833741   13136 out.go:177] * Found network options:
	I0203 12:30:19.837143   13136 out.go:177]   - NO_PROXY=172.25.12.244
	W0203 12:30:19.839410   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:30:19.842013   13136 out.go:177]   - NO_PROXY=172.25.12.244
	W0203 12:30:19.843510   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	W0203 12:30:19.844509   13136 proxy.go:119] fail to check proxy env: Error ip not in block
	I0203 12:30:19.846634   13136 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0203 12:30:19.846634   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:19.853415   13136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 12:30:19.853415   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:21.870647   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:21.870647   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:21.870827   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:21.888685   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:21.888685   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:21.889685   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:24.276259   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:24.276259   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:24.277466   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:24.299754   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:24.299754   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:24.299754   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:24.376017   13136 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0203 12:30:24.376257   13136 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5227912s)
	W0203 12:30:24.376338   13136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 12:30:24.383423   13136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 12:30:24.389052   13136 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0203 12:30:24.389052   13136 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.542367s)
	W0203 12:30:24.389505   13136 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0203 12:30:24.418504   13136 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0203 12:30:24.418504   13136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0203 12:30:24.418504   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:30:24.418504   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:30:24.451525   13136 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0203 12:30:24.459581   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0203 12:30:24.488217   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 12:30:24.508414   13136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 12:30:24.516260   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 12:30:24.544176   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 12:30:24.571678   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 12:30:24.598339   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W0203 12:30:24.610327   13136 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0203 12:30:24.611150   13136 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0203 12:30:24.629388   13136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 12:30:24.660168   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 12:30:24.689929   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0203 12:30:24.718942   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
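The sed edits from 12:30:24.45 through 12:30:24.71 pin a handful of containerd settings; a quick way to confirm the result on the node is to grep the rewritten config (a sketch; only the values below are implied by the log, the surrounding TOML layout is minikube's default and not shown here):

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected after the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false            <- i.e. the "cgroupfs" cgroup driver
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true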
	I0203 12:30:24.747741   13136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 12:30:24.765714   13136 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:30:24.766184   13136 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0203 12:30:24.774355   13136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0203 12:30:24.810504   13136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
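The sysctl failure at 12:30:24.765 is expected before br_netfilter is loaded; the recovery performed above is essentially the following (sketch):

    sudo modprobe br_netfilter                      # provides /proc/sys/net/bridge/*
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo sysctl net.bridge.bridge-nf-call-iptables  # resolves once the module is loaded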
	I0203 12:30:24.834121   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:25.010868   13136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 12:30:25.047072   13136 start.go:495] detecting cgroup driver to use...
	I0203 12:30:25.055414   13136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 12:30:25.076455   13136 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0203 12:30:25.076455   13136 command_runner.go:130] > [Unit]
	I0203 12:30:25.076455   13136 command_runner.go:130] > Description=Docker Application Container Engine
	I0203 12:30:25.076455   13136 command_runner.go:130] > Documentation=https://docs.docker.com
	I0203 12:30:25.076455   13136 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0203 12:30:25.077200   13136 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0203 12:30:25.077200   13136 command_runner.go:130] > StartLimitBurst=3
	I0203 12:30:25.077360   13136 command_runner.go:130] > StartLimitIntervalSec=60
	I0203 12:30:25.077360   13136 command_runner.go:130] > [Service]
	I0203 12:30:25.077360   13136 command_runner.go:130] > Type=notify
	I0203 12:30:25.077360   13136 command_runner.go:130] > Restart=on-failure
	I0203 12:30:25.077360   13136 command_runner.go:130] > Environment=NO_PROXY=172.25.12.244
	I0203 12:30:25.077360   13136 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0203 12:30:25.077360   13136 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0203 12:30:25.077360   13136 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0203 12:30:25.077360   13136 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0203 12:30:25.077360   13136 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0203 12:30:25.077360   13136 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0203 12:30:25.077360   13136 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecStart=
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0203 12:30:25.077360   13136 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0203 12:30:25.077360   13136 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitNOFILE=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitNPROC=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > LimitCORE=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0203 12:30:25.077360   13136 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0203 12:30:25.077360   13136 command_runner.go:130] > TasksMax=infinity
	I0203 12:30:25.077360   13136 command_runner.go:130] > TimeoutStartSec=0
	I0203 12:30:25.077360   13136 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0203 12:30:25.077360   13136 command_runner.go:130] > Delegate=yes
	I0203 12:30:25.077360   13136 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0203 12:30:25.077360   13136 command_runner.go:130] > KillMode=process
	I0203 12:30:25.077360   13136 command_runner.go:130] > [Install]
	I0203 12:30:25.077360   13136 command_runner.go:130] > WantedBy=multi-user.target
	I0203 12:30:25.086445   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:30:25.116174   13136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0203 12:30:25.157577   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0203 12:30:25.192208   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:30:25.223508   13136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0203 12:30:25.283980   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 12:30:25.308451   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 12:30:25.344031   13136 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0203 12:30:25.352532   13136 ssh_runner.go:195] Run: which cri-dockerd
	I0203 12:30:25.358930   13136 command_runner.go:130] > /usr/bin/cri-dockerd
	I0203 12:30:25.367553   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 12:30:25.384841   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0203 12:30:25.426734   13136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 12:30:25.622634   13136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 12:30:25.795117   13136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 12:30:25.795117   13136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0203 12:30:25.839323   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:26.017146   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 12:30:28.673763   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6565878s)
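The 130-byte /etc/docker/daemon.json pushed at 12:30:25.795 is what moves dockerd to the cgroupfs driver; the log only states the effect, so the JSON shown below is an assumption, but checking it on the node would look roughly like this:

    sudo cat /etc/docker/daemon.json
    # likely of the form: {"exec-opts": ["native.cgroupdriver=cgroupfs"], ...}
    docker info --format '{{.CgroupDriver}}'        # should print: cgroupfs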
	I0203 12:30:28.680764   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0203 12:30:28.712976   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:30:28.743289   13136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 12:30:28.928229   13136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 12:30:29.117571   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:29.304467   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 12:30:29.341881   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0203 12:30:29.371943   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:29.548852   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0203 12:30:29.651781   13136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 12:30:29.659524   13136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 12:30:29.667791   13136 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0203 12:30:29.667791   13136 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0203 12:30:29.667791   13136 command_runner.go:130] > Device: 0,22	Inode: 859         Links: 1
	I0203 12:30:29.667791   13136 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0203 12:30:29.667791   13136 command_runner.go:130] > Access: 2025-02-03 12:30:29.589054259 +0000
	I0203 12:30:29.667791   13136 command_runner.go:130] > Modify: 2025-02-03 12:30:29.589054259 +0000
	I0203 12:30:29.667919   13136 command_runner.go:130] > Change: 2025-02-03 12:30:29.593054266 +0000
	I0203 12:30:29.667919   13136 command_runner.go:130] >  Birth: -
	I0203 12:30:29.668024   13136 start.go:563] Will wait 60s for crictl version
	I0203 12:30:29.675669   13136 ssh_runner.go:195] Run: which crictl
	I0203 12:30:29.681717   13136 command_runner.go:130] > /usr/bin/crictl
	I0203 12:30:29.689217   13136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 12:30:29.739657   13136 command_runner.go:130] > Version:  0.1.0
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeName:  docker
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0203 12:30:29.739657   13136 command_runner.go:130] > RuntimeApiVersion:  v1
	I0203 12:30:29.739657   13136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0203 12:30:29.746863   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:30:29.784543   13136 command_runner.go:130] > 27.4.0
	I0203 12:30:29.791537   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 12:30:29.824172   13136 command_runner.go:130] > 27.4.0
	I0203 12:30:29.828197   13136 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0203 12:30:29.830237   13136 out.go:177]   - env NO_PROXY=172.25.12.244
	I0203 12:30:29.833206   13136 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0203 12:30:29.837211   13136 ip.go:211] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:37:32:ac Flags:up|broadcast|multicast|running}
	I0203 12:30:29.840206   13136 ip.go:214] interface addr: fe80::c77d:5c4b:3bd9:9577/64
	I0203 12:30:29.840206   13136 ip.go:214] interface addr: 172.25.0.1/20
	I0203 12:30:29.848210   13136 ssh_runner.go:195] Run: grep 172.25.0.1	host.minikube.internal$ /etc/hosts
	I0203 12:30:29.855196   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
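The grep-then-rewrite at 12:30:29.855 is minikube's idempotent way of pinning a hosts entry; as a reusable sketch (the helper name is illustrative, not from the log):

    set_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    set_hosts_entry 172.25.0.1 host.minikube.internal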
	I0203 12:30:29.877543   13136 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:30:29.877707   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:29.878794   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:31.834191   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:31.834191   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:31.834325   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:31.834843   13136 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300 for IP: 172.25.12.83
	I0203 12:30:31.834843   13136 certs.go:194] generating shared ca certs ...
	I0203 12:30:31.834843   13136 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 12:30:31.835379   13136 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0203 12:30:31.835668   13136 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0203 12:30:31.835896   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 12:30:31.835948   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 12:30:31.836482   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem (1338 bytes)
	W0203 12:30:31.836853   13136 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452_empty.pem, impossibly tiny 0 bytes
	I0203 12:30:31.836930   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0203 12:30:31.837104   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0203 12:30:31.837357   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0203 12:30:31.837556   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0203 12:30:31.837862   13136 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem (1708 bytes)
	I0203 12:30:31.838018   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:31.838184   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem -> /usr/share/ca-certificates/5452.pem
	I0203 12:30:31.838271   13136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem -> /usr/share/ca-certificates/54522.pem
	I0203 12:30:31.838469   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 12:30:31.884367   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 12:30:31.927632   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 12:30:31.971236   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 12:30:32.015509   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 12:30:32.059445   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5452.pem --> /usr/share/ca-certificates/5452.pem (1338 bytes)
	I0203 12:30:32.103160   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\54522.pem --> /usr/share/ca-certificates/54522.pem (1708 bytes)
	I0203 12:30:32.156168   13136 ssh_runner.go:195] Run: openssl version
	I0203 12:30:32.164997   13136 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0203 12:30:32.173999   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 12:30:32.201808   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.209041   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.209041   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  3 10:30 /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.217562   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 12:30:32.228961   13136 command_runner.go:130] > b5213941
	I0203 12:30:32.238593   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 12:30:32.264594   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5452.pem && ln -fs /usr/share/ca-certificates/5452.pem /etc/ssl/certs/5452.pem"
	I0203 12:30:32.292026   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.298814   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.299288   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  3 10:45 /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.307057   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5452.pem
	I0203 12:30:32.315958   13136 command_runner.go:130] > 51391683
	I0203 12:30:32.323055   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5452.pem /etc/ssl/certs/51391683.0"
	I0203 12:30:32.351057   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54522.pem && ln -fs /usr/share/ca-certificates/54522.pem /etc/ssl/certs/54522.pem"
	I0203 12:30:32.380654   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.387930   13136 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.388042   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  3 10:45 /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.395870   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54522.pem
	I0203 12:30:32.404292   13136 command_runner.go:130] > 3ec20f2e
	I0203 12:30:32.412732   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/54522.pem /etc/ssl/certs/3ec20f2e.0"
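The hash/symlink pairs above (12:30:32.20 through 12:30:32.41) are the standard OpenSSL hashed-directory layout that c_rehash automates; condensed, using the same paths as the log:

    for cert in minikubeCA 5452 54522; do
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${cert}.pem")   # e.g. b5213941
      sudo ln -fs "/etc/ssl/certs/${cert}.pem" "/etc/ssl/certs/${hash}.0"
    done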
	I0203 12:30:32.440159   13136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0203 12:30:32.446772   13136 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:30:32.446772   13136 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0203 12:30:32.446772   13136 kubeadm.go:934] updating node {m02 172.25.12.83 8443 v1.32.1 docker false true} ...
	I0203 12:30:32.446772   13136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-749300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.12.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0203 12:30:32.454162   13136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0203 12:30:32.473643   13136 command_runner.go:130] > kubeadm
	I0203 12:30:32.473695   13136 command_runner.go:130] > kubectl
	I0203 12:30:32.473695   13136 command_runner.go:130] > kubelet
	I0203 12:30:32.473729   13136 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 12:30:32.481580   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0203 12:30:32.501567   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0203 12:30:32.531463   13136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 12:30:32.570564   13136 ssh_runner.go:195] Run: grep 172.25.12.244	control-plane.minikube.internal$ /etc/hosts
	I0203 12:30:32.577410   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.12.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 12:30:32.606757   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:32.793095   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:30:32.822245   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:32.822983   13136 start.go:317] joinCluster: &{Name:multinode-749300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-749300 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.12.244 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.0.54 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-
provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMe
trics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 12:30:32.823146   13136 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:32.823192   13136 host.go:66] Checking if "multinode-749300-m02" exists ...
	I0203 12:30:32.823667   13136 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:30:32.824088   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:32.824567   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:34.845527   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:34.845527   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:34.846527   13136 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:30:34.846677   13136 api_server.go:166] Checking apiserver status ...
	I0203 12:30:34.855152   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:30:34.855214   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:36.862077   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:36.862077   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:36.862629   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:39.202108   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:30:39.202895   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:39.202895   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:30:39.318448   13136 command_runner.go:130] > 1987
	I0203 12:30:39.318448   13136 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4631838s)
	I0203 12:30:39.326382   13136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup
	W0203 12:30:39.346014   13136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1987/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 12:30:39.354732   13136 ssh_runner.go:195] Run: ls
	I0203 12:30:39.362437   13136 api_server.go:253] Checking apiserver healthz at https://172.25.12.244:8443/healthz ...
	I0203 12:30:39.373178   13136 api_server.go:279] https://172.25.12.244:8443/healthz returned 200:
	ok
	I0203 12:30:39.380420   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-749300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0203 12:30:39.519234   13136 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-dc9wq, kube-system/kube-proxy-ggnq7
	I0203 12:30:42.555132   13136 command_runner.go:130] > node/multinode-749300-m02 cordoned
	I0203 12:30:42.555272   13136 command_runner.go:130] > pod "busybox-58667487b6-c66bf" has DeletionTimestamp older than 1 seconds, skipping
	I0203 12:30:42.555272   13136 command_runner.go:130] > node/multinode-749300-m02 drained
	I0203 12:30:42.555272   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-749300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1748159s)
	I0203 12:30:42.555272   13136 node.go:128] successfully drained node "multinode-749300-m02"
	I0203 12:30:42.555399   13136 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0203 12:30:42.555491   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:44.522164   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:46.967766   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.83
	
	I0203 12:30:46.967821   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:46.968184   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:30:47.402992   13136 command_runner.go:130] ! W0203 12:30:47.419505    1672 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0203 12:30:47.606618   13136 command_runner.go:130] ! W0203 12:30:47.623353    1672 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod fbb29dd3e5ebc489c42552b25f24ca2b8d6fb85e374593277c866a7c497f491e: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-58667487b6-c66bf_default" network: cni config uninitialized
	I0203 12:30:47.628787   13136 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:30:47.628845   13136 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0203 12:30:47.628892   13136 command_runner.go:130] > [reset] Stopping the kubelet service
	I0203 12:30:47.628892   13136 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0203 12:30:47.628929   13136 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0203 12:30:47.628971   13136 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0203 12:30:47.628997   13136 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0203 12:30:47.628997   13136 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0203 12:30:47.628997   13136 command_runner.go:130] > to reset your system's IPVS tables.
	I0203 12:30:47.628997   13136 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0203 12:30:47.628997   13136 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0203 12:30:47.628997   13136 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.0734887s)
	I0203 12:30:47.628997   13136 node.go:155] successfully reset node "multinode-749300-m02"
	I0203 12:30:47.629910   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:30:47.630988   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:30:47.632226   13136 cert_rotation.go:140] Starting client certificate rotation controller
	I0203 12:30:47.632226   13136 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0203 12:30:47.632226   13136 round_trippers.go:463] DELETE https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:47.632226   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:47.632226   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:47.632226   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:47.632226   13136 round_trippers.go:473]     Content-Type: application/json
	I0203 12:30:47.651082   13136 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0203 12:30:47.651082   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:47.651082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:47.651082   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Content-Length: 171
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:47 GMT
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Audit-Id: 2ddd1a96-a225-4a38-aaa1-a67411022e02
	I0203 12:30:47.651082   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:47.651082   13136 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-749300-m02","kind":"nodes","uid":"cd5a245c-9c18-44de-8d11-2d12a3c5fd64"}}
	I0203 12:30:47.651082   13136 node.go:180] successfully deleted node "multinode-749300-m02"
	I0203 12:30:47.652083   13136 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
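Everything from 12:30:39.38 to 12:30:47.65 is the usual "remove the stale worker before rejoining" sequence; done by hand it would look roughly like this (flags copied from the log; the ssh hop to m02 is illustrative):

    kubectl drain multinode-749300-m02 --force --grace-period=1 \
      --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
    ssh docker@172.25.12.83 sudo kubeadm reset --force \
      --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock
    kubectl delete node multinode-749300-m02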
	I0203 12:30:47.652083   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0203 12:30:47.652083   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:49.650917   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:30:52.005941   13136 main.go:141] libmachine: [stdout =====>] : 172.25.12.244
	
	I0203 12:30:52.005941   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:30:52.005941   13136 sshutil.go:53] new ssh client: &{IP:172.25.12.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:30:52.436057   13136 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce 
	I0203 12:30:52.436907   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7847697s)
	I0203 12:30:52.436975   13136 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:52.436975   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02"
	I0203 12:30:52.609431   13136 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 12:30:53.972004   13136 command_runner.go:130] > [preflight] Running pre-flight checks
	I0203 12:30:53.972903   13136 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0203 12:30:53.972903   13136 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 12:30:53.972903   13136 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0203 12:30:53.972997   13136 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0203 12:30:53.973081   13136 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 503.565131ms
	I0203 12:30:53.973140   13136 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0203 12:30:53.973140   13136 command_runner.go:130] > This node has joined the cluster:
	I0203 12:30:53.973226   13136 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0203 12:30:53.973226   13136 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0203 12:30:53.973226   13136 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0203 12:30:53.973226   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f7bvoc.9tp7leab6i1ufi1o --discovery-token-ca-cert-hash sha256:a7f525548e78251ae619ff10a1027aee0da736388832c1856185a573daf9cbce --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-749300-m02": (1.5362337s)
	I0203 12:30:53.973338   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0203 12:30:54.179128   13136 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0203 12:30:54.368281   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-749300-m02 minikube.k8s.io/updated_at=2025_02_03T12_30_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d minikube.k8s.io/name=multinode-749300 minikube.k8s.io/primary=false
	I0203 12:30:54.503657   13136 command_runner.go:130] > node/multinode-749300-m02 labeled
	I0203 12:30:54.503657   13136 start.go:319] duration metric: took 21.6804317s to joinCluster
	I0203 12:30:54.503657   13136 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.12.83 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0203 12:30:54.504507   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:30:54.506479   13136 out.go:177] * Verifying Kubernetes components...
	I0203 12:30:54.517803   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 12:30:54.703763   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0203 12:30:54.736542   13136 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 12:30:54.737098   13136 kapi.go:59] client config for multinode-749300: &rest.Config{Host:"https://172.25.12.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-749300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x219e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 12:30:54.737638   13136 node_ready.go:35] waiting up to 6m0s for node "multinode-749300-m02" to be "Ready" ...
	I0203 12:30:54.738053   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:54.738053   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:54.738053   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:54.738053   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:54.741809   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:54.741809   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:54.741809   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:54.741809   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:54 GMT
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Audit-Id: 0e27248a-ca01-4565-b1d7-b55afa090727
	I0203 12:30:54.741809   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:54.741809   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:55.238679   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:55.238679   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:55.238679   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:55.238679   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:55.247915   13136 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0203 12:30:55.247915   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Audit-Id: 542b25a7-d059-489d-b023-1778b403c416
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:55.247915   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:55.247915   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:55.247915   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:55 GMT
	I0203 12:30:55.247915   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:55.738020   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:55.738537   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:55.738537   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:55.738537   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:55.744601   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:30:55.744601   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:55.744601   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:55.744601   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:55 GMT
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Audit-Id: aeab1258-d0d0-4d3a-82d4-d64a6fda3876
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:55.744601   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:55.744601   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.238110   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:56.238110   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:56.238110   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:56.238110   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:56.241922   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:56.241922   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:56 GMT
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Audit-Id: cd6a60e2-bd4c-46ed-b8e2-a088357667b5
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:56.241922   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:56.241922   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:56.241922   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:56.242459   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.737953   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:56.738347   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:56.738347   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:56.738347   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:56.741047   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:30:56.742046   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Audit-Id: c4cb4f5c-2da5-4d53-89ab-8336bddbabb8
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:56.742046   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:56.742046   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:56.742046   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:56 GMT
	I0203 12:30:56.742046   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:56.742046   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:30:57.238071   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:57.238071   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:57.238071   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:57.238071   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:57.242848   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:57.242944   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:57.242944   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:57.242944   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:57 GMT
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Audit-Id: b847ac4c-ca87-4e4e-906c-af762bb9a7b2
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:57.242944   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:57.243140   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:57.738158   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:57.738158   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:57.738158   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:57.738158   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:57.742170   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:57.742241   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Audit-Id: 7f33799f-30c7-4d16-b404-1cb2056dd0b2
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:57.742241   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:57.742241   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:57.742241   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:57 GMT
	I0203 12:30:57.742533   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:58.238985   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:58.238985   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:58.238985   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:58.238985   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:58.243167   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:58.243167   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Audit-Id: 22f1ce72-4616-426e-a5af-c76ddc116d03
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:58.243167   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:58.243167   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:58.243167   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:58 GMT
	I0203 12:30:58.243488   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2115","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0203 12:30:58.738536   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:58.738536   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:58.738536   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:58.738536   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:58.742800   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:58.742800   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:58.742908   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:58.742908   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:58 GMT
	I0203 12:30:58.742908   13136 round_trippers.go:580]     Audit-Id: 7ccf0d0a-c805-47c1-9249-d6fba4c2294e
	I0203 12:30:58.743049   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:30:58.743446   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:30:59.237850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:59.237850   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:59.237850   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:59.237850   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:59.242646   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:30:59.242646   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Audit-Id: 0f35e76d-276a-45e1-8b19-21667d4518a4
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:59.242646   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:59.242728   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:59.242728   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:59.242728   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:59 GMT
	I0203 12:30:59.242886   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:30:59.738416   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:30:59.738416   13136 round_trippers.go:469] Request Headers:
	I0203 12:30:59.738416   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:30:59.738416   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:30:59.742216   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:30:59.742216   13136 round_trippers.go:577] Response Headers:
	I0203 12:30:59.742216   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:30:59.742216   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:30:59 GMT
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Audit-Id: 328dc19a-de53-4f49-8f03-a7b89b9b7994
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:30:59.742216   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:30:59.742423   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:00.239542   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:00.239542   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:00.239542   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:00.239542   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:00.243306   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:00.243306   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:00.243306   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:00.243306   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:00 GMT
	I0203 12:31:00.243306   13136 round_trippers.go:580]     Audit-Id: 14d46575-ebb9-4f53-a377-369436f2efed
	I0203 12:31:00.244318   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:00.737766   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:00.737766   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:00.737766   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:00.737766   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:00.741935   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:00.741935   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Audit-Id: bba2832a-5795-46bb-b517-1b48a45f26ea
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:00.742010   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:00.742010   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:00.742010   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:00 GMT
	I0203 12:31:00.742191   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:01.238469   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:01.238469   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:01.238469   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:01.238469   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:01.242646   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:01.242775   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:01.242775   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:01.242775   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:01 GMT
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Audit-Id: aeeb4e6d-debf-4189-aec9-586a8ee73a54
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:01.242775   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:01.242893   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:01.243379   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:01.738968   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:01.738968   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:01.738968   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:01.738968   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:01.743020   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:01.743020   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Audit-Id: c4a755bc-a420-438c-90ba-a828088500ea
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:01.743020   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:01.743020   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:01.743020   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:01 GMT
	I0203 12:31:01.743282   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:02.238051   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:02.238051   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:02.238051   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:02.238051   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:02.242323   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:02.242420   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:02 GMT
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Audit-Id: 625f6715-d8fe-4660-826d-156e44619097
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:02.242420   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:02.242420   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:02.242420   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:02.243060   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:02.739507   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:02.739578   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:02.739578   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:02.739578   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:02.743073   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:02.743073   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:02.743073   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:02.743073   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:02 GMT
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Audit-Id: a20a5a74-bc6c-40a2-877c-3b10174146ad
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:02.743171   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:02.743629   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.238575   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:03.238575   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:03.238575   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:03.238575   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:03.242978   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:03.242978   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Audit-Id: a2ae7118-3676-43c4-a7d7-31f7e1042bea
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:03.243108   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:03.243108   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:03.243108   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:03 GMT
	I0203 12:31:03.243184   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.738377   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:03.738377   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:03.738377   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:03.738377   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:03.742819   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:03.742819   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:03.742819   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:03 GMT
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Audit-Id: 1a6ddf04-1e71-4498-92ea-d1fdf3d4ab86
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:03.742936   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:03.742936   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:03.743073   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2140","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0203 12:31:03.743522   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:04.238303   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:04.238303   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:04.238303   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:04.238303   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:04.242898   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:04.242898   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Audit-Id: ea6c1ddc-fc65-4688-9b3b-7a0dc305b988
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:04.243009   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:04.243009   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:04.243009   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:04 GMT
	I0203 12:31:04.243113   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:04.738517   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:04.738517   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:04.738517   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:04.738517   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:04.742786   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:04.742786   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Audit-Id: 7b1c16a2-00aa-42e7-842a-a46b86b2b831
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:04.742786   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:04.742786   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:04.742786   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:04 GMT
	I0203 12:31:04.743238   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.238753   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:05.238753   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:05.238753   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:05.238753   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:05.242401   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:05.242401   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:05.242401   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:05.242483   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:05.242483   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:05 GMT
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Audit-Id: a8c8aa18-4b21-4787-8d24-16b8423c93a2
	I0203 12:31:05.242483   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:05.242940   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.738285   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:05.738285   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:05.738285   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:05.738285   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:05.743325   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:05.743390   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:05.743390   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:05.743390   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:05.743390   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:05.743390   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:05.743448   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:05 GMT
	I0203 12:31:05.743448   13136 round_trippers.go:580]     Audit-Id: a24c6c6f-149d-4a5a-93aa-edd6fa1fc3c5
	I0203 12:31:05.744030   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:05.744341   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:06.238991   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:06.239063   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:06.239063   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:06.239063   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:06.242569   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:06.242569   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Audit-Id: 77039a85-63e1-48ad-9427-b515055869b2
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:06.242569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:06.242569   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:06.242569   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:06 GMT
	I0203 12:31:06.243096   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:06.738850   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:06.739590   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:06.739590   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:06.739590   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:06.745861   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:06.745861   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:06.745861   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:06.745861   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:06 GMT
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Audit-Id: 341bb909-e1f9-4ef4-a7f0-febfe84200c8
	I0203 12:31:06.745861   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:06.746612   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:07.238888   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:07.238888   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:07.238888   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:07.238888   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:07.242127   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:07.242563   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:07.242563   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:07.242563   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:07.242626   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:07 GMT
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Audit-Id: 81229084-47d2-4190-a5b0-1f92ac79a21d
	I0203 12:31:07.242626   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:07.242947   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:07.739132   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:07.739309   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:07.739309   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:07.739309   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:07.742965   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:07.743031   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Audit-Id: b8633751-d755-4a9d-9291-f992357d1099
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:07.743031   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:07.743031   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:07.743031   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:07.743105   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:07 GMT
	I0203 12:31:07.743245   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:08.238644   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:08.238644   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:08.238644   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:08.238644   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:08.243447   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:08.243536   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:08.243536   13136 round_trippers.go:580]     Audit-Id: dfea063a-de99-4654-9b92-abff80de0f2a
	I0203 12:31:08.243536   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:08.243571   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:08.243571   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:08.243571   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:08.243571   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:08 GMT
	I0203 12:31:08.243795   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:08.243795   13136 node_ready.go:53] node "multinode-749300-m02" has status "Ready":"False"
	I0203 12:31:08.738876   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:08.738876   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:08.738876   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:08.738876   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:08.743015   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:08.743015   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Audit-Id: 1df72dec-4aa8-4b03-bc00-00306ed37560
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:08.743015   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:08.743015   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:08.743015   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:08 GMT
	I0203 12:31:08.743015   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:09.237985   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:09.237985   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:09.237985   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:09.237985   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:09.242413   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:09.242492   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:09.242492   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:09 GMT
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Audit-Id: da6cd810-f2ea-4514-93db-5e9eca14a8b2
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:09.242492   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:09.242492   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:09.242732   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:09.738187   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:09.738187   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:09.738187   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:09.738187   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:09.742607   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:09.742607   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:09.742607   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:09.742607   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:09 GMT
	I0203 12:31:09.742607   13136 round_trippers.go:580]     Audit-Id: 05661713-4387-4f75-b655-51606d72ace1
	I0203 12:31:09.742833   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2149","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0203 12:31:10.238174   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:10.238174   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.238174   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.238174   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.243384   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.243384   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.243384   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.243384   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Audit-Id: 15498d28-4edc-4ca7-a4f5-5d46a0b5623d
	I0203 12:31:10.243384   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.243384   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2158","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0203 12:31:10.244043   13136 node_ready.go:49] node "multinode-749300-m02" has status "Ready":"True"
	I0203 12:31:10.244125   13136 node_ready.go:38] duration metric: took 15.506313s for node "multinode-749300-m02" to be "Ready" ...
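The wait above polls the node object with a GET roughly every 500ms until its Ready condition reports True. As an illustrative aside (not part of the test output), a minimal client-go sketch of the same readiness check might look like the following; the kubeconfig path is a placeholder and the node name is taken from the log.

// Sketch only: poll a node's Ready condition, similar to minikube's node_ready wait.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test uses the minikube-integration kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-749300-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// The log shows roughly half-second spacing between polls.
		time.Sleep(500 * time.Millisecond)
	}
}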
	I0203 12:31:10.244125   13136 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:31:10.244290   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods
	I0203 12:31:10.244290   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.244290   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.244290   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.250326   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:10.250326   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Audit-Id: 1bf4f542-2901-4c52-944a-99dc63b4edc8
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.250326   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.250326   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.250326   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.251694   13136 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2160"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89844 chars]
	I0203 12:31:10.255659   13136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.255659   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-v2gkp
	I0203 12:31:10.255659   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.255659   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.255659   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.259217   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:10.259217   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Audit-Id: dbcecdc4-c942-46ac-b731-cb5635ac0341
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.259217   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.259217   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.259217   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.259217   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-v2gkp","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"c94a77a3-456e-41d7-b9ad-7aa97e0264a7","resourceVersion":"1962","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"8e271cd3-ba43-4561-90a8-a544c8413c57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e271cd3-ba43-4561-90a8-a544c8413c57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0203 12:31:10.260186   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.260186   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.260186   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.260186   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.263270   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.263312   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.263312   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.263312   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Audit-Id: e79e89b8-9466-4eac-bb0d-c463e202cdf0
	I0203 12:31:10.263312   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.263443   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.263874   13136 pod_ready.go:93] pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.263940   13136 pod_ready.go:82] duration metric: took 8.2153ms for pod "coredns-668d6bf9bc-v2gkp" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.263940   13136 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.263999   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-749300
	I0203 12:31:10.263999   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.263999   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.263999   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.266747   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.266747   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.266747   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Audit-Id: b959a1c7-63ec-4dcf-98fb-cc495338b276
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.266747   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.266747   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.266747   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-749300","namespace":"kube-system","uid":"a956084b-f454-4ef5-8fed-7c189cb74ab0","resourceVersion":"1876","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.25.12.244:2379","kubernetes.io/config.hash":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.mirror":"f85eb916773a482447e41aa40aaff233","kubernetes.io/config.seen":"2025-02-03T12:27:19.750780815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0203 12:31:10.267425   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.267479   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.267479   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.267479   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.269709   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.269709   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.269709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.269709   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Audit-Id: a19b8d33-3e44-495c-8f95-8e561bb5f764
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.269709   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.269709   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.269709   13136 pod_ready.go:93] pod "etcd-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.269709   13136 pod_ready.go:82] duration metric: took 5.7694ms for pod "etcd-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.269709   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.270710   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-749300
	I0203 12:31:10.270710   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.270710   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.270710   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.272538   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:31:10.272538   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.272538   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Audit-Id: 557ab5cc-2b1f-4331-b1cc-3281c6a147ac
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.272538   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.272538   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.273552   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-749300","namespace":"kube-system","uid":"72513861-07f4-4533-8f55-8b3cce215b4c","resourceVersion":"1856","creationTimestamp":"2025-02-03T12:27:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.25.12.244:8443","kubernetes.io/config.hash":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.mirror":"20275825c8d44051c01f8d920b297acd","kubernetes.io/config.seen":"2025-02-03T12:27:19.750137111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0203 12:31:10.274154   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.274154   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.274212   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.274212   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.275926   13136 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0203 12:31:10.275926   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.275926   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.275926   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.275926   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.275926   13136 round_trippers.go:580]     Audit-Id: b5789de2-b973-4bbd-b299-0a56a35dfbaf
	I0203 12:31:10.276721   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.276721   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.276995   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.277395   13136 pod_ready.go:93] pod "kube-apiserver-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.277395   13136 pod_ready.go:82] duration metric: took 6.685ms for pod "kube-apiserver-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.277395   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.277579   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-749300
	I0203 12:31:10.277613   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.277652   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.277652   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.279895   13136 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0203 12:31:10.279895   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Audit-Id: 0bac0957-4903-425c-914b-0c22e8499ab8
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.279895   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.279895   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.279895   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.279895   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-749300","namespace":"kube-system","uid":"63c0818c-a0e6-40d1-b0c4-1cd633c91afb","resourceVersion":"1874","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.mirror":"c25845f184856fc216b76acafcf34ee9","kubernetes.io/config.seen":"2025-02-03T12:04:55.455020645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0203 12:31:10.279895   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.279895   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.279895   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.279895   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.286150   13136 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0203 12:31:10.286150   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.286235   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.286235   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.286235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.286235   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.286269   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.286269   13136 round_trippers.go:580]     Audit-Id: 073ccdab-2566-412f-915e-b462c49a331a
	I0203 12:31:10.286269   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.286269   13136 pod_ready.go:93] pod "kube-controller-manager-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.286269   13136 pod_ready.go:82] duration metric: took 8.8738ms for pod "kube-controller-manager-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.286269   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.439730   13136 request.go:632] Waited for 153.4595ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:31:10.439730   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g92t
	I0203 12:31:10.439730   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.439730   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.439730   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.445099   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.445165   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.445165   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.445165   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.445165   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.445224   13136 round_trippers.go:580]     Audit-Id: 0e3d0907-e9a1-40aa-97f4-e616430abb2f
	I0203 12:31:10.445240   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.445240   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.445920   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9g92t","generateName":"kube-proxy-","namespace":"kube-system","uid":"1709b874-4fee-41f5-8d30-24912b2fa725","resourceVersion":"1844","creationTimestamp":"2025-02-03T12:05:00Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0203 12:31:10.638447   13136 request.go:632] Waited for 191.7891ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.638640   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:10.638640   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.638640   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.638640   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.642490   13136 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0203 12:31:10.642490   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Audit-Id: c6acb06d-a44d-49b2-a512-bfd16ce4c115
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.642490   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.642490   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.642490   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.642490   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:10.643228   13136 pod_ready.go:93] pod "kube-proxy-9g92t" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:10.643325   13136 pod_ready.go:82] duration metric: took 357.0518ms for pod "kube-proxy-9g92t" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.643325   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:10.838331   13136 request.go:632] Waited for 194.8983ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:31:10.838331   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggnq7
	I0203 12:31:10.838331   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:10.838331   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:10.838331   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:10.843483   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:10.843592   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Audit-Id: 6385e06e-a930-4a15-9a26-edfb13aa566d
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:10.843592   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:10.843592   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:10.843592   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:10 GMT
	I0203 12:31:10.843910   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggnq7","generateName":"kube-proxy-","namespace":"kube-system","uid":"63bc9e77-90e3-40c5-9b49-e95d2bfd7426","resourceVersion":"2129","creationTimestamp":"2025-02-03T12:07:57Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:07:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0203 12:31:11.038388   13136 request.go:632] Waited for 193.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:11.038388   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m02
	I0203 12:31:11.038388   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.038388   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.038388   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.043185   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.043294   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Audit-Id: 10b97eb8-776c-4828-ba1e-2dd45725b8b6
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.043294   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.043294   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.043294   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.043576   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m02","uid":"636afda7-b6a2-409d-9b6f-87054001aca9","resourceVersion":"2158","creationTimestamp":"2025-02-03T12:30:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_30_54_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:30:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0203 12:31:11.043947   13136 pod_ready.go:93] pod "kube-proxy-ggnq7" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:11.044052   13136 pod_ready.go:82] duration metric: took 400.7227ms for pod "kube-proxy-ggnq7" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.044052   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.238789   13136 request.go:632] Waited for 194.6409ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:31:11.238789   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w8wrd
	I0203 12:31:11.238789   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.238789   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.238789   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.243909   13136 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0203 12:31:11.243909   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Audit-Id: d228ddc7-c79a-489a-b7d0-2d9f31c8686e
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.243980   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.243980   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.243980   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.244409   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8wrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"f81878fa-528f-4bdf-90ec-83f54166370e","resourceVersion":"1727","creationTimestamp":"2025-02-03T12:12:30Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"04519c88-48ba-439f-bd57-a9c8b286d988","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04519c88-48ba-439f-bd57-a9c8b286d988\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6418 chars]
	I0203 12:31:11.438475   13136 request.go:632] Waited for 193.8544ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:31:11.438475   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300-m03
	I0203 12:31:11.438475   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.438475   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.438475   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.443006   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.443006   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.443006   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.443006   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Audit-Id: 053409ed-3659-41e4-b123-5bac1e64643f
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.443006   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.443006   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300-m03","uid":"1765fbe7-e04a-4337-8284-6152642b17de","resourceVersion":"1838","creationTimestamp":"2025-02-03T12:22:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_02_03T12_22_58_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:22:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0203 12:31:11.443669   13136 pod_ready.go:98] node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:31:11.443669   13136 pod_ready.go:82] duration metric: took 399.6126ms for pod "kube-proxy-w8wrd" in "kube-system" namespace to be "Ready" ...
	E0203 12:31:11.443669   13136 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-749300-m03" hosting pod "kube-proxy-w8wrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-749300-m03" has status "Ready":"Unknown"
	I0203 12:31:11.443669   13136 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.639375   13136 request.go:632] Waited for 195.7039ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:31:11.639375   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-749300
	I0203 12:31:11.639375   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.639375   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.639375   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.643697   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.643697   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.643758   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.643758   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Audit-Id: 9259a0e1-204c-41f7-b143-c7cb8df5ea00
	I0203 12:31:11.643758   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.644338   13136 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-749300","namespace":"kube-system","uid":"8e4c1052-9dca-466d-833b-eff318b977d7","resourceVersion":"1864","creationTimestamp":"2025-02-03T12:04:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.mirror":"a4dc8a8db691940bb17375ec22c0921e","kubernetes.io/config.seen":"2025-02-03T12:04:55.455022345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-02-03T12:04:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5563 chars]
	I0203 12:31:11.838525   13136 request.go:632] Waited for 193.5487ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:11.838918   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes/multinode-749300
	I0203 12:31:11.838918   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:11.838918   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:11.839050   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:11.843718   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:11.844695   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:11 GMT
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Audit-Id: 05cdf026-770b-47b7-a77e-b43c8521fdb6
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:11.844695   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:11.844695   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:11.844695   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:11.845047   13136 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-02-03T12:04:52Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0203 12:31:11.845142   13136 pod_ready.go:93] pod "kube-scheduler-multinode-749300" in "kube-system" namespace has status "Ready":"True"
	I0203 12:31:11.845142   13136 pod_ready.go:82] duration metric: took 401.4686ms for pod "kube-scheduler-multinode-749300" in "kube-system" namespace to be "Ready" ...
	I0203 12:31:11.845142   13136 pod_ready.go:39] duration metric: took 1.6009997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 12:31:11.845142   13136 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 12:31:11.855058   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:31:11.881584   13136 system_svc.go:56] duration metric: took 36.4416ms WaitForService to wait for kubelet
	I0203 12:31:11.881584   13136 kubeadm.go:582] duration metric: took 17.3777323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 12:31:11.881584   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0203 12:31:12.038527   13136 request.go:632] Waited for 156.9412ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.12.244:8443/api/v1/nodes
	I0203 12:31:12.038527   13136 round_trippers.go:463] GET https://172.25.12.244:8443/api/v1/nodes
	I0203 12:31:12.038527   13136 round_trippers.go:469] Request Headers:
	I0203 12:31:12.038527   13136 round_trippers.go:473]     Accept: application/json, */*
	I0203 12:31:12.038527   13136 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0203 12:31:12.043440   13136 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0203 12:31:12.043440   13136 round_trippers.go:577] Response Headers:
	I0203 12:31:12.043440   13136 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ae97adc-153f-47f2-9fba-a98b8fb84d69
	I0203 12:31:12.043440   13136 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2649c26c-16ea-4148-b018-5de1c665cb3b
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Date: Mon, 03 Feb 2025 12:31:12 GMT
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Audit-Id: 94d3c108-b72a-41b4-aa10-94f8ebbb33cb
	I0203 12:31:12.043440   13136 round_trippers.go:580]     Cache-Control: no-cache, private
	I0203 12:31:12.043544   13136 round_trippers.go:580]     Content-Type: application/json
	I0203 12:31:12.043620   13136 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2162"},"items":[{"metadata":{"name":"multinode-749300","uid":"c038ee8a-6364-4b71-8e5a-614059cad2a0","resourceVersion":"1921","creationTimestamp":"2025-02-03T12:04:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-749300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"fabdc61a5ad6636a3c32d75095e383488eaa6e8d","minikube.k8s.io/name":"multinode-749300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_02_03T12_04_56_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15605 chars]
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0203 12:31:12.044565   13136 node_conditions.go:123] node cpu capacity is 2
	I0203 12:31:12.044565   13136 node_conditions.go:105] duration metric: took 162.9793ms to run NodePressure ...
	I0203 12:31:12.044565   13136 start.go:241] waiting for startup goroutines ...
	I0203 12:31:12.045013   13136 start.go:255] writing updated cluster config ...
	I0203 12:31:12.048785   13136 out.go:201] 
	I0203 12:31:12.052350   13136 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:31:12.064211   13136 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:31:12.065390   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:12.070323   13136 out.go:177] * Starting "multinode-749300-m03" worker node in "multinode-749300" cluster
	I0203 12:31:12.073187   13136 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 12:31:12.073187   13136 cache.go:56] Caching tarball of preloaded images
	I0203 12:31:12.074245   13136 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 12:31:12.074245   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 12:31:12.074245   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:12.080973   13136 start.go:360] acquireMachinesLock for multinode-749300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0203 12:31:12.081616   13136 start.go:364] duration metric: took 642.7µs to acquireMachinesLock for "multinode-749300-m03"
	I0203 12:31:12.081652   13136 start.go:96] Skipping create...Using existing machine configuration
	I0203 12:31:12.081806   13136 fix.go:54] fixHost starting: m03
	I0203 12:31:12.081964   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:14.027593   13136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:31:14.028145   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:14.028145   13136 fix.go:112] recreateIfNeeded on multinode-749300-m03: state=Stopped err=<nil>
	W0203 12:31:14.028145   13136 fix.go:138] unexpected machine state, will restart: <nil>
	I0203 12:31:14.031469   13136 out.go:177] * Restarting existing hyperv VM for "multinode-749300-m03" ...
	I0203 12:31:14.035182   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-749300-m03
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:16.943565   13136 main.go:141] libmachine: Waiting for host to start...
	I0203 12:31:16.943565   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:19.063291   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:19.063714   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:19.063714   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:21.386102   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:21.387981   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:22.388865   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:24.442376   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:26.810538   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:26.811297   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:27.811882   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:29.825160   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:29.825745   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:29.825823   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:32.144905   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:32.144905   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:33.145802   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:35.189490   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:37.501094   13136 main.go:141] libmachine: [stdout =====>] : 
	I0203 12:31:37.501094   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:38.501981   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:40.529276   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:40.529276   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:40.529716   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:42.937766   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:42.937822   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:42.939705   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:44.904301   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:44.905326   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:44.905524   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:47.295389   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:47.295734   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:47.295809   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-749300\config.json ...
	I0203 12:31:47.297945   13136 machine.go:93] provisionDockerMachine start ...
	I0203 12:31:47.297945   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:49.263502   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:49.263502   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:49.263587   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:51.646635   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:51.647229   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:51.651422   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:31:51.651997   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.188 22 <nil> <nil>}
	I0203 12:31:51.651997   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0203 12:31:51.777036   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0203 12:31:51.777036   13136 buildroot.go:166] provisioning hostname "multinode-749300-m03"
	I0203 12:31:51.777036   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:53.735936   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:53.735936   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:53.736004   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 12:31:56.074362   13136 main.go:141] libmachine: [stdout =====>] : 172.25.1.188
	
	I0203 12:31:56.074362   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:56.081116   13136 main.go:141] libmachine: Using SSH client type: native
	I0203 12:31:56.081776   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5f5360] 0x5f7ea0 <nil>  [] 0s} 172.25.1.188 22 <nil> <nil>}
	I0203 12:31:56.081776   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-749300-m03 && echo "multinode-749300-m03" | sudo tee /etc/hostname
	I0203 12:31:56.249106   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-749300-m03
	
	I0203 12:31:56.249178   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:31:58.207279   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m03 ).networkadapters[0]).ipaddresses[0]
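
The libmachine lines above poll Hyper-V roughly once per second for the VM's state and first IPv4 address, then provision the hostname over SSH once an address is reported. A minimal PowerShell sketch of the polling step (illustration only; the VM name comes from the log, while the attempt cap and sleep interval are assumptions, not minikube's actual values):

    # Poll a Hyper-V VM until its first network adapter reports an IPv4 address (sketch).
    $vmName = "multinode-749300-m03"          # VM name taken from the log above
    for ($i = 0; $i -lt 60; $i++) {           # 60-attempt cap is an assumption
        $state = (Hyper-V\Get-VM $vmName).State
        $ip = ((Hyper-V\Get-VM $vmName).NetworkAdapters[0]).IPAddresses |
            Where-Object { $_ -match '^\d{1,3}(\.\d{1,3}){3}$' } |
            Select-Object -First 1
        if ($state -eq 'Running' -and $ip) { Write-Output "VM is up at $ip"; break }
        Start-Sleep -Seconds 1                # the log shows roughly one query per second
    }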
	
	
	==> Docker <==
	Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.590947498Z" level=info msg="shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591492803Z" level=warning msg="cleaning up after shim disconnected" id=edf3d4284acbb2e7300d551961e3443ac50cb2d668630fc999b3805c4062c578 namespace=moby
	Feb 03 12:27:57 multinode-749300 dockerd[1107]: time="2025-02-03T12:27:57.591599004Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013597299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013673700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.013692300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:11 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:11.014212603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223402731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223571532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.223671032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.236644911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237659918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.237678218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.238007320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 03 12:28:30 multinode-749300 cri-dockerd[1378]: time="2025-02-03T12:28:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2/resolv.conf as [nameserver 172.25.0.1]"
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.741947665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742072666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742088066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.742520068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783254697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783521498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783775700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 03 12:28:30 multinode-749300 dockerd[1107]: time="2025-02-03T12:28:30.783932101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edb5f00f10420       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   1                   ac5f0bf5197cf       coredns-668d6bf9bc-v2gkp
	0ff3e07f2982f       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   d290c79ddbf8d       busybox-58667487b6-zgvmd
	7cbc7a552a4c3       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   1eece224f54eb       storage-provisioner
	644890f5738e5       d300845f67aeb                                                                                         5 minutes ago       Running             kindnet-cni               1                   c682ff8834bf4       kindnet-h6m57
	edf3d4284acbb       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   1eece224f54eb       storage-provisioner
	cf33452e72443       e29f9c7391fd9                                                                                         5 minutes ago       Running             kube-proxy                1                   c4912e7d3383e       kube-proxy-9g92t
	09707a8629658       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   fc833a943f11f       etcd-multinode-749300
	2e43c2ecb4a92       2b0d6572d062c                                                                                         5 minutes ago       Running             kube-scheduler            1                   e2da6b5a5bd1b       kube-scheduler-multinode-749300
	fa5ab1df89857       019ee182b58e2                                                                                         5 minutes ago       Running             kube-controller-manager   1                   d8732fe7d2435       kube-controller-manager-multinode-749300
	6c19e0a0ba9c0       95c0bda56fc4d                                                                                         5 minutes ago       Running             kube-apiserver            0                   264f9c1c2c05f       kube-apiserver-multinode-749300
	f42690726d50f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   efcd217a3204d       busybox-58667487b6-zgvmd
	fe91a8d012aee       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   26e5557dc32ce       coredns-668d6bf9bc-v2gkp
	fab2d9be6b5c7       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              27 minutes ago      Exited              kindnet-cni               0                   cb49b32ba0852       kindnet-h6m57
	c6dc514e98f69       e29f9c7391fd9                                                                                         27 minutes ago      Exited              kube-proxy                0                   1ff01fa7d8c67       kube-proxy-9g92t
	8ade10c0fb096       019ee182b58e2                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   b1b473818438d       kube-controller-manager-multinode-749300
	88c40ca9aa3cb       2b0d6572d062c                                                                                         27 minutes ago      Exited              kube-scheduler            0                   d8d9e598659ff       kube-scheduler-multinode-749300
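
The container status table above is collected from the Docker runtime on the control-plane node. It can be reproduced over minikube's SSH tunnel; a sketch using the profile name from this run:

    # List all containers on the control-plane node of this profile (sketch).
    minikube -p multinode-749300 ssh -- sudo docker ps -a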
	
	
	==> coredns [edb5f00f1042] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e8130cfa8e96169e54fdb81903f9b4680c96074b93281de316a617894d613269c265db78cbf1be00f04df6f27627d689838921ad115c7f1fadc26b632a43f17
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49536 - 20223 "HINFO IN 8316577845745372206.6425600211286211531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049207769s
	
	
	==> coredns [fe91a8d012ae] <==
	[INFO] 10.244.0.3:48199 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275502s
	[INFO] 10.244.0.3:40769 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194202s
	[INFO] 10.244.0.3:56613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241303s
	[INFO] 10.244.0.3:36390 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000127501s
	[INFO] 10.244.0.3:49253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150501s
	[INFO] 10.244.0.3:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115601s
	[INFO] 10.244.0.3:37098 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000782s
	[INFO] 10.244.1.2:47927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154002s
	[INFO] 10.244.1.2:49855 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156202s
	[INFO] 10.244.1.2:51176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114201s
	[INFO] 10.244.1.2:45626 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156701s
	[INFO] 10.244.0.3:33142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141402s
	[INFO] 10.244.0.3:36637 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249602s
	[INFO] 10.244.0.3:34293 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135301s
	[INFO] 10.244.0.3:59245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112701s
	[INFO] 10.244.1.2:56139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200702s
	[INFO] 10.244.1.2:53567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	[INFO] 10.244.1.2:55778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182502s
	[INFO] 10.244.1.2:53486 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000163702s
	[INFO] 10.244.0.3:52745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191702s
	[INFO] 10.244.0.3:38587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132301s
	[INFO] 10.244.0.3:53685 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078101s
	[INFO] 10.244.0.3:38406 - 5 "PTR IN 1.0.25.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000076301s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
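
The CoreDNS entries above record in-cluster lookups for kubernetes.default and host.minikube.internal. The same resolution path can be exercised from the busybox pod listed in the container status table; a sketch, assuming that pod is still running and that the kubeconfig context matches the profile name:

    # Resolve an in-cluster service name through CoreDNS from a running pod (sketch).
    kubectl --context multinode-749300 exec busybox-58667487b6-zgvmd -- nslookup kubernetes.default.svc.cluster.local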
	
	
	==> describe nodes <==
	Name:               multinode-749300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-749300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=multinode-749300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_03T12_04_56_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 12:04:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-749300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 12:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 12:28:10 +0000   Mon, 03 Feb 2025 12:28:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.12.244
	  Hostname:    multinode-749300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa9fbed762e844a2902d570b7040a1f0
	  System UUID:                69ffc0f0-a1d7-9e4e-97f3-ed54041f4203
	  Boot ID:                    d8bb3b39-ca1e-4113-9882-57d63502f9b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-zgvmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-668d6bf9bc-v2gkp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-749300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-h6m57                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-749300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-multinode-749300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-9g92t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-749300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	  Normal   NodeReady                27m                    kubelet          Node multinode-749300 status is now: NodeReady
	  Normal   Starting                 5m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node multinode-749300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node multinode-749300 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 5m6s                   kubelet          Node multinode-749300 has been rebooted, boot id: d8bb3b39-ca1e-4113-9882-57d63502f9b2
	  Normal   RegisteredNode           5m3s                   node-controller  Node multinode-749300 event: Registered Node multinode-749300 in Controller
	
	
	Name:               multinode-749300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-749300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=multinode-749300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T12_30_54_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 12:30:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-749300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 12:32:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Feb 2025 12:31:09 +0000   Mon, 03 Feb 2025 12:30:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Feb 2025 12:31:09 +0000   Mon, 03 Feb 2025 12:30:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Feb 2025 12:31:09 +0000   Mon, 03 Feb 2025 12:30:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Feb 2025 12:31:09 +0000   Mon, 03 Feb 2025 12:31:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.12.83
	  Hostname:    multinode-749300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 422d2c5810fb48d4b00542795df46599
	  System UUID:                4e05b2a5-08ff-3741-b04f-b8bc068a3e3b
	  Boot ID:                    6a7ae1d5-e948-48b2-a62f-f2370ab3a2ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-6rlj5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-dc9wq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-ggnq7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 94s                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                kubelet          Node multinode-749300-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  98s (x2 over 98s)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x2 over 98s)  kubelet          Node multinode-749300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x2 over 98s)  kubelet          Node multinode-749300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                node-controller  Node multinode-749300-m02 event: Registered Node multinode-749300-m02 in Controller
	  Normal  NodeReady                82s                kubelet          Node multinode-749300-m02 status is now: NodeReady
	
	
	Name:               multinode-749300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-749300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fabdc61a5ad6636a3c32d75095e383488eaa6e8d
	                    minikube.k8s.io/name=multinode-749300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_03T12_22_58_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Feb 2025 12:22:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-749300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Feb 2025 12:23:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Feb 2025 12:23:13 +0000   Mon, 03 Feb 2025 12:24:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.0.54
	  Hostname:    multinode-749300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d40ad4379a4ec5b47dd7ccdbdcfdd3
	  System UUID:                605d710b-5b92-ec4e-8d85-0f6c10e8d37a
	  Boot ID:                    13f88b1f-ea06-4747-bc4f-774ad0edb09f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bckxx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-proxy-w8wrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 9m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  20m (x2 over 20m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     20m (x2 over 20m)      kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    20m (x2 over 20m)      kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                19m                    kubelet          Node multinode-749300-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     9m33s                  cidrAllocator    Node multinode-749300-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  9m33s (x2 over 9m33s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m33s (x2 over 9m33s)  kubelet          Node multinode-749300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m33s (x2 over 9m33s)  kubelet          Node multinode-749300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m32s                  node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
	  Normal  NodeReady                9m18s                  kubelet          Node multinode-749300-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m41s                  node-controller  Node multinode-749300-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           5m3s                   node-controller  Node multinode-749300-m03 event: Registered Node multinode-749300-m03 in Controller
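
multinode-749300-m03 still carries node.kubernetes.io/unreachable taints and the 10.244.3.0/24 PodCIDR it held before the restart, which appears to be what the CIDR reassignment error in the kube-controller-manager log further below trips over. The node view can be re-checked directly; a sketch using the names from this report:

    # Inspect the worker node's taints and assigned pod CIDRs (sketch).
    kubectl --context multinode-749300 describe node multinode-749300-m03
    kubectl --context multinode-749300 get node multinode-749300-m03 -o jsonpath='{.spec.podCIDRs}{"\n"}{.spec.taints}{"\n"}'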
	
	
	==> dmesg <==
	[  +6.580601] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.325226] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.308770] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Feb 3 12:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.595913] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.095070] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.080250] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[Feb 3 12:27] systemd-fstab-generator[1026]: Ignoring "noauto" option for root device
	[  +0.111210] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.499536] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.200113] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.221690] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +2.970290] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.201836] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	[  +0.192903] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.251653] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.851149] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	[  +0.100990] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.722313] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	[  +1.365001] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.747815] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.773287] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	[ +27.270277] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [09707a862965] <==
	{"level":"info","ts":"2025-02-03T12:27:21.925111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","added-peer-id":"aee9b6e79987349e","added-peer-peer-urls":["https://172.25.1.53:2380"]}
	{"level":"info","ts":"2025-02-03T12:27:21.926083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd3b09816c9d03a4","local-member-id":"aee9b6e79987349e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:27:21.926140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-03T12:27:21.926075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T12:27:21.931282Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-03T12:27:21.932289Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.12.244:2380"}
	{"level":"info","ts":"2025-02-03T12:27:21.932461Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.12.244:2380"}
	{"level":"info","ts":"2025-02-03T12:27:21.932990Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aee9b6e79987349e","initial-advertise-peer-urls":["https://172.25.12.244:2380"],"listen-peer-urls":["https://172.25.12.244:2380"],"advertise-client-urls":["https://172.25.12.244:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.12.244:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-03T12:27:21.933175Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-03T12:27:23.283427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-03T12:27:23.283612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-03T12:27:23.283693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgPreVoteResp from aee9b6e79987349e at term 2"}
	{"level":"info","ts":"2025-02-03T12:27:23.283817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became candidate at term 3"}
	{"level":"info","ts":"2025-02-03T12:27:23.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e received MsgVoteResp from aee9b6e79987349e at term 3"}
	{"level":"info","ts":"2025-02-03T12:27:23.283950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aee9b6e79987349e became leader at term 3"}
	{"level":"info","ts":"2025-02-03T12:27:23.283999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aee9b6e79987349e elected leader aee9b6e79987349e at term 3"}
	{"level":"info","ts":"2025-02-03T12:27:23.298589Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aee9b6e79987349e","local-member-attributes":"{Name:multinode-749300 ClientURLs:[https://172.25.12.244:2379]}","request-path":"/0/members/aee9b6e79987349e/attributes","cluster-id":"bd3b09816c9d03a4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-03T12:27:23.298815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T12:27:23.299061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-03T12:27:23.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-03T12:27:23.301847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-03T12:27:23.306842Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T12:27:23.310094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-03T12:27:23.312993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-03T12:27:23.319087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.12.244:2379"}
	
	
	==> kernel <==
	 12:32:31 up 6 min,  0 users,  load average: 0.18, 0.29, 0.16
	Linux multinode-749300 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [644890f5738e] <==
	I0203 12:31:48.662541       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:31:58.657721       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:31:58.657854       1 main.go:301] handling current node
	I0203 12:31:58.657876       1 main.go:297] Handling node with IPs: map[172.25.12.83:{}]
	I0203 12:31:58.657884       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:31:58.658586       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:31:58.658673       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:32:08.660497       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:32:08.660609       1 main.go:301] handling current node
	I0203 12:32:08.660646       1 main.go:297] Handling node with IPs: map[172.25.12.83:{}]
	I0203 12:32:08.660959       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:32:08.661538       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:32:08.661630       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:32:18.657701       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:32:18.657751       1 main.go:301] handling current node
	I0203 12:32:18.657770       1 main.go:297] Handling node with IPs: map[172.25.12.83:{}]
	I0203 12:32:18.657777       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:32:18.658285       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:32:18.658455       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:32:28.657810       1 main.go:297] Handling node with IPs: map[172.25.12.83:{}]
	I0203 12:32:28.657856       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:32:28.658124       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:32:28.658224       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:32:28.658446       1 main.go:297] Handling node with IPs: map[172.25.12.244:{}]
	I0203 12:32:28.658569       1 main.go:301] handling current node
	
	
	==> kindnet [fab2d9be6b5c] <==
	I0203 12:24:19.486547       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:24:29.479544       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:24:29.480058       1 main.go:301] handling current node
	I0203 12:24:29.480294       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:24:29.480571       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:24:29.482395       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:24:29.482495       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:24:39.487057       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:24:39.487164       1 main.go:301] handling current node
	I0203 12:24:39.487184       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:24:39.487192       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:24:39.487371       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:24:39.487395       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:24:49.479049       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:24:49.479126       1 main.go:301] handling current node
	I0203 12:24:49.479266       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:24:49.479354       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:24:49.480131       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:24:49.480242       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	I0203 12:24:59.479305       1 main.go:297] Handling node with IPs: map[172.25.1.53:{}]
	I0203 12:24:59.479727       1 main.go:301] handling current node
	I0203 12:24:59.479826       1 main.go:297] Handling node with IPs: map[172.25.8.35:{}]
	I0203 12:24:59.479839       1 main.go:324] Node multinode-749300-m02 has CIDR [10.244.1.0/24] 
	I0203 12:24:59.480314       1 main.go:297] Handling node with IPs: map[172.25.0.54:{}]
	I0203 12:24:59.480509       1 main.go:324] Node multinode-749300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6c19e0a0ba9c] <==
	I0203 12:27:24.963020       1 autoregister_controller.go:144] Starting autoregister controller
	I0203 12:27:24.963034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 12:27:24.983465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0203 12:27:24.983682       1 policy_source.go:240] refreshing policies
	I0203 12:27:24.988524       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 12:27:25.002635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0203 12:27:25.006114       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0203 12:27:25.007504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0203 12:27:25.021232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0203 12:27:25.021549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0203 12:27:25.021784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 12:27:25.040252       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0203 12:27:25.063391       1 cache.go:39] Caches are synced for autoregister controller
	I0203 12:27:25.063942       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0203 12:27:25.064322       1 shared_informer.go:320] Caches are synced for configmaps
	I0203 12:27:25.809340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 12:27:25.881836       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0203 12:27:26.443758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.12.244]
	I0203 12:27:26.447833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 12:27:26.461396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0203 12:27:27.972522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 12:27:28.290141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 12:27:28.509424       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 12:27:28.520726       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0203 12:27:28.561004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [8ade10c0fb09] <==
	I0203 12:21:07.487830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300"
	I0203 12:22:48.017949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:22:48.044428       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:22:52.915959       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:22:58.370520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:22:58.373994       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m03\" does not exist"
	I0203 12:22:58.409838       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.3.0/24"]
	I0203 12:22:58.410167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	E0203 12:22:58.438530       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03" podCIDRs=["10.244.4.0/24"]
	E0203 12:22:58.438947       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-749300-m03"
	E0203 12:22:58.439229       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-749300-m03': failed to patch node CIDR: Node \"multinode-749300-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0203 12:22:58.439401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:22:58.444440       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:22:58.960922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:22:59.994381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:23:08.704715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:23:13.216732       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:23:13.218582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:23:13.233034       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:23:14.968424       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:23:15.606788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:24:50.048901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:24:50.049506       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:24:50.231618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	I0203 12:24:55.449570       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m03"
	
	
	==> kube-controller-manager [fa5ab1df8985] <==
	I0203 12:30:39.624165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.201µs"
	E0203 12:30:48.224997       1 gc_controller.go:151] "Failed to get node" err="node \"multinode-749300-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-749300-m02"
	I0203 12:30:53.605297       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-749300-m02\" does not exist"
	I0203 12:30:53.623557       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-749300-m02" podCIDRs=["10.244.1.0/24"]
	I0203 12:30:53.624815       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:30:53.624965       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:30:53.629758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.9µs"
	I0203 12:30:53.664567       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:30:53.984319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:30:54.517395       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:30:55.446751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="49.1µs"
	I0203 12:30:58.544033       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:31:04.049068       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:31:09.790994       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-749300-m02"
	I0203 12:31:09.791448       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:31:09.805819       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:31:09.817162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.301µs"
	I0203 12:31:13.491882       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-749300-m02"
	I0203 12:31:21.559272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="87.9µs"
	I0203 12:31:21.731521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="50.101µs"
	I0203 12:31:21.737323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="32.001µs"
	I0203 12:31:30.348727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.601µs"
	I0203 12:31:30.373044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="131.201µs"
	I0203 12:31:32.053599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="14.493983ms"
	I0203 12:31:32.053673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.701µs"
	
	
	==> kube-proxy [c6dc514e98f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 12:05:01.805329       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 12:05:01.822582       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.1.53"]
	E0203 12:05:01.822737       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:05:01.878001       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:05:01.878049       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:05:01.878079       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:05:01.883741       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:05:01.884139       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:05:01.884172       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:05:01.886194       1 config.go:199] "Starting service config controller"
	I0203 12:05:01.886246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:05:01.886272       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:05:01.886277       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:05:01.886976       1 config.go:329] "Starting node config controller"
	I0203 12:05:01.887004       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:05:01.987328       1 shared_informer.go:320] Caches are synced for node config
	I0203 12:05:01.987379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:05:01.987536       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [cf33452e7244] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0203 12:27:28.027381       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0203 12:27:28.187333       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.12.244"]
	E0203 12:27:28.189467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0203 12:27:28.571807       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0203 12:27:28.573724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0203 12:27:28.574028       1 server_linux.go:170] "Using iptables Proxier"
	I0203 12:27:28.580953       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0203 12:27:28.586727       1 server.go:497] "Version info" version="v1.32.1"
	I0203 12:27:28.590708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:27:28.619546       1 config.go:199] "Starting service config controller"
	I0203 12:27:28.621538       1 config.go:105] "Starting endpoint slice config controller"
	I0203 12:27:28.621733       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0203 12:27:28.623181       1 config.go:329] "Starting node config controller"
	I0203 12:27:28.623915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0203 12:27:28.626746       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0203 12:27:28.627120       1 shared_informer.go:320] Caches are synced for service config
	I0203 12:27:28.722206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0203 12:27:28.724853       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e43c2ecb4a9] <==
	I0203 12:27:23.141470       1 serving.go:386] Generated self-signed cert in-memory
	W0203 12:27:24.897433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 12:27:24.897513       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 12:27:24.897526       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 12:27:24.897538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 12:27:25.033204       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0203 12:27:25.033541       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 12:27:25.041065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0203 12:27:25.044977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0203 12:27:25.045234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:27:25.045638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:27:25.146094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [88c40ca9aa3c] <==
	W0203 12:04:53.471735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.471980       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.482216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0203 12:04:53.482267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.497579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0203 12:04:53.497628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.544588       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0203 12:04:53.545097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0203 12:04:53.614992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0203 12:04:53.615323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.655102       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0203 12:04:53.655499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.655303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0203 12:04:53.656094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.713710       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.713767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.764352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0203 12:04:53.764706       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0203 12:04:53.799751       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0203 12:04:53.800034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0203 12:04:56.288855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 12:25:02.182209       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0203 12:25:02.205551       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 12:25:02.205980       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0203 12:25:02.233103       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 03 12:28:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:28:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:28:19 multinode-749300 kubelet[1646]: I0203 12:28:19.923723    1646 scope.go:117] "RemoveContainer" containerID="e3efb81aa459abda7cc19b8607aa9d2bc56a837cc325e672683ffa4a9d05876b"
	Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.439871    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d290c79ddbf8dbaaae0ac6ae29ff1695c351eb244341bb86dfa66bd51e407af5"
	Feb 03 12:28:30 multinode-749300 kubelet[1646]: I0203 12:28:30.451444    1646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5f0bf5197cf2f2f9c600a6d9f77ea7775ba4c80a3a3c30272ea8dc42d9f4e2"
	Feb 03 12:29:19 multinode-749300 kubelet[1646]: E0203 12:29:19.877400    1646 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:29:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:29:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:29:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:29:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:30:19 multinode-749300 kubelet[1646]: E0203 12:30:19.876234    1646 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:30:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:30:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:30:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:30:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:31:19 multinode-749300 kubelet[1646]: E0203 12:31:19.874268    1646 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:31:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:31:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:31:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:31:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 03 12:32:19 multinode-749300 kubelet[1646]: E0203 12:32:19.882207    1646 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 03 12:32:19 multinode-749300 kubelet[1646]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 03 12:32:19 multinode-749300 kubelet[1646]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 03 12:32:19 multinode-749300 kubelet[1646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 03 12:32:19 multinode-749300 kubelet[1646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-749300 -n multinode-749300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-749300 -n multinode-749300: (10.9959028s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-749300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (544.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-426800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-426800 --driver=hyperv: exit status 1 (4m59.738944s)

                                                
                                                
-- stdout --
	* [NoKubernetes-426800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-426800" primary control-plane node in "NoKubernetes-426800" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-426800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-426800 -n NoKubernetes-426800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-426800 -n NoKubernetes-426800: exit status 7 (2.3860111s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-426800" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.13s)

                                                
                                    

Test pass (171/213)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.9
4 TestDownloadOnly/v1.20.0/preload-exists 0.06
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.35
9 TestDownloadOnly/v1.20.0/DeleteAll 1.37
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.48
12 TestDownloadOnly/v1.32.1/json-events 10.25
13 TestDownloadOnly/v1.32.1/preload-exists 0
16 TestDownloadOnly/v1.32.1/kubectl 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.34
18 TestDownloadOnly/v1.32.1/DeleteAll 1.55
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 1.49
21 TestBinaryMirror 6.47
22 TestOffline 395.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.25
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
27 TestAddons/Setup 419.3
29 TestAddons/serial/Volcano 64.63
31 TestAddons/serial/GCPAuth/Namespaces 0.3
32 TestAddons/serial/GCPAuth/FakeCredentials 11.42
35 TestAddons/parallel/Registry 32.4
36 TestAddons/parallel/Ingress 61.89
37 TestAddons/parallel/InspektorGadget 27.47
38 TestAddons/parallel/MetricsServer 20.95
40 TestAddons/parallel/CSI 89.68
41 TestAddons/parallel/Headlamp 39.48
42 TestAddons/parallel/CloudSpanner 19.17
43 TestAddons/parallel/LocalPath 83.04
44 TestAddons/parallel/NvidiaDevicePlugin 20.02
45 TestAddons/parallel/Yakd 25.48
47 TestAddons/StoppedEnableDisable 51.89
48 TestCertOptions 546.17
49 TestCertExpiration 889.01
50 TestDockerFlags 331.86
51 TestForceSystemdFlag 243.15
52 TestForceSystemdEnv 403.74
59 TestErrorSpam/start 16.02
60 TestErrorSpam/status 34.08
61 TestErrorSpam/pause 21.24
62 TestErrorSpam/unpause 21.65
63 TestErrorSpam/stop 53.04
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 212.55
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 118.25
70 TestFunctional/serial/KubeContext 0.11
71 TestFunctional/serial/KubectlGetPods 0.19
74 TestFunctional/serial/CacheCmd/cache/add_remote 24.1
75 TestFunctional/serial/CacheCmd/cache/add_local 9.69
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.24
77 TestFunctional/serial/CacheCmd/cache/list 0.24
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.68
79 TestFunctional/serial/CacheCmd/cache/cache_reload 33.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.49
81 TestFunctional/serial/MinikubeKubectlCmd 0.45
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.91
83 TestFunctional/serial/ExtraConfig 121.18
84 TestFunctional/serial/ComponentHealth 0.16
85 TestFunctional/serial/LogsCmd 8.01
86 TestFunctional/serial/LogsFileCmd 9.95
87 TestFunctional/serial/InvalidService 19.44
89 TestFunctional/parallel/ConfigCmd 1.82
93 TestFunctional/parallel/StatusCmd 37.16
97 TestFunctional/parallel/ServiceCmdConnect 40.49
98 TestFunctional/parallel/AddonsCmd 0.67
99 TestFunctional/parallel/PersistentVolumeClaim 43.25
101 TestFunctional/parallel/SSHCmd 20.27
102 TestFunctional/parallel/CpCmd 51.79
103 TestFunctional/parallel/MySQL 56.71
104 TestFunctional/parallel/FileSync 9.36
105 TestFunctional/parallel/CertSync 54.43
109 TestFunctional/parallel/NodeLabels 0.18
111 TestFunctional/parallel/NonActiveRuntimeDisabled 9.39
113 TestFunctional/parallel/License 1.49
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.4
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 26.53
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.39
126 TestFunctional/parallel/ServiceCmd/List 12.59
127 TestFunctional/parallel/ProfileCmd/profile_not_create 12.78
128 TestFunctional/parallel/ServiceCmd/JSONOutput 12.61
129 TestFunctional/parallel/ProfileCmd/profile_list 12.72
131 TestFunctional/parallel/ProfileCmd/profile_json_output 13.33
132 TestFunctional/parallel/Version/short 0.23
133 TestFunctional/parallel/Version/components 7.32
135 TestFunctional/parallel/ImageCommands/ImageListShort 6.89
136 TestFunctional/parallel/ImageCommands/ImageListTable 6.83
137 TestFunctional/parallel/ImageCommands/ImageListJson 6.88
138 TestFunctional/parallel/ImageCommands/ImageListYaml 7.01
139 TestFunctional/parallel/ImageCommands/ImageBuild 25.49
140 TestFunctional/parallel/ImageCommands/Setup 2.04
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 15.38
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 14.65
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.72
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.12
146 TestFunctional/parallel/ImageCommands/ImageRemove 14.13
147 TestFunctional/parallel/DockerEnv/powershell 38.51
148 TestFunctional/parallel/UpdateContextCmd/no_changes 2.53
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.3
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.63
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 14.52
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.37
153 TestFunctional/delete_echo-server_images 0.19
154 TestFunctional/delete_my-image_image 0.07
155 TestFunctional/delete_minikube_cached_images 0.09
160 TestMultiControlPlane/serial/StartCluster 661.93
161 TestMultiControlPlane/serial/DeployApp 12.68
163 TestMultiControlPlane/serial/AddWorkerNode 242.74
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 44.37
166 TestMultiControlPlane/serial/CopyFile 581.79
167 TestMultiControlPlane/serial/StopSecondaryNode 69.85
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 35.13
172 TestImageBuild/serial/Setup 184.61
173 TestImageBuild/serial/NormalBuild 9.71
174 TestImageBuild/serial/BuildWithBuildArg 8.17
175 TestImageBuild/serial/BuildWithDockerIgnore 7.53
176 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.63
180 TestJSONOutput/start/Command 190.17
181 TestJSONOutput/start/Audit 0.04
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 7.57
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 8.8
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 33.64
199 TestJSONOutput/stop/Audit 0.04
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.84
208 TestMainNoArgs 0.23
209 TestMinikubeProfile 501.68
212 TestMountStart/serial/StartWithMountFirst 146.18
213 TestMountStart/serial/VerifyMountFirst 8.87
214 TestMountStart/serial/StartWithMountSecond 145.24
215 TestMountStart/serial/VerifyMountSecond 8.84
216 TestMountStart/serial/DeleteFirst 26.5
217 TestMountStart/serial/VerifyMountPostDelete 8.86
218 TestMountStart/serial/Stop 26.35
219 TestMountStart/serial/RestartStopped 112.15
220 TestMountStart/serial/VerifyMountPostStop 8.79
223 TestMultiNode/serial/FreshStart2Nodes 409.54
224 TestMultiNode/serial/DeployApp2Nodes 8.38
226 TestMultiNode/serial/AddNode 222.06
227 TestMultiNode/serial/MultiNodeLabels 0.17
228 TestMultiNode/serial/ProfileList 33.35
229 TestMultiNode/serial/CopyFile 332.02
230 TestMultiNode/serial/StopNode 70.7
231 TestMultiNode/serial/StartAfterStop 176.73
236 TestPreload 472.53
237 TestScheduledStopWindows 312.12
242 TestRunningBinaryUpgrade 892.12
244 TestKubernetesUpgrade 1256.84
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.3
260 TestStoppedBinaryUpgrade/Setup 0.96
261 TestStoppedBinaryUpgrade/Upgrade 804.08
270 TestPause/serial/Start 472.68
271 TestPause/serial/SecondStartNoReconfiguration 299.4
272 TestStoppedBinaryUpgrade/MinikubeLogs 9.12
273 TestPause/serial/Pause 8.07
274 TestPause/serial/VerifyStatus 12.72
275 TestPause/serial/Unpause 8.14
276 TestPause/serial/PauseAgain 8.73
277 TestPause/serial/DeletePaused 47.46
278 TestPause/serial/VerifyDeletedResources 13.88
TestDownloadOnly/v1.20.0/json-events (16.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-436100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-436100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.8990908s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.90s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0203 10:27:22.874785    5452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0203 10:27:22.930325    5452 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-436100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-436100: exit status 85 (347.4398ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-436100 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:27 UTC |          |
	|         | -p download-only-436100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 10:27:06
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 10:27:06.066395    5620 out.go:345] Setting OutFile to fd 736 ...
	I0203 10:27:06.119854    5620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:27:06.119854    5620 out.go:358] Setting ErrFile to fd 708...
	I0203 10:27:06.120420    5620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0203 10:27:06.131348    5620 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0203 10:27:06.140084    5620 out.go:352] Setting JSON to true
	I0203 10:27:06.142464    5620 start.go:129] hostinfo: {"hostname":"minikube5","uptime":163027,"bootTime":1738415398,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 10:27:06.142464    5620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 10:27:06.148460    5620 out.go:97] [download-only-436100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	W0203 10:27:06.148839    5620 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0203 10:27:06.148839    5620 notify.go:220] Checking for updates...
	I0203 10:27:06.151635    5620 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 10:27:06.153946    5620 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 10:27:06.156099    5620 out.go:169] MINIKUBE_LOCATION=20354
	I0203 10:27:06.158273    5620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0203 10:27:06.163701    5620 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 10:27:06.164563    5620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:27:11.111288    5620 out.go:97] Using the hyperv driver based on user configuration
	I0203 10:27:11.111288    5620 start.go:297] selected driver: hyperv
	I0203 10:27:11.111288    5620 start.go:901] validating driver "hyperv" against <nil>
	I0203 10:27:11.111288    5620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 10:27:11.157783    5620 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0203 10:27:11.158995    5620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 10:27:11.159735    5620 cni.go:84] Creating CNI manager for ""
	I0203 10:27:11.159735    5620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0203 10:27:11.159735    5620 start.go:340] cluster config:
	{Name:download-only-436100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-436100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:27:11.160426    5620 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:27:11.165238    5620 out.go:97] Downloading VM boot image ...
	I0203 10:27:11.165238    5620 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0203 10:27:14.566250    5620 out.go:97] Starting "download-only-436100" primary control-plane node in "download-only-436100" cluster
	I0203 10:27:14.566250    5620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0203 10:27:14.634643    5620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0203 10:27:14.634643    5620 cache.go:56] Caching tarball of preloaded images
	I0203 10:27:14.635181    5620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0203 10:27:14.638627    5620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0203 10:27:14.638627    5620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:14.713140    5620 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0203 10:27:19.206232    5620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:19.233825    5620 preload.go:254] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:20.212255    5620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0203 10:27:20.212895    5620 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-436100\config.json ...
	I0203 10:27:20.213252    5620 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-436100\config.json: {Name:mk9f1141396c4d035db2ea18e92e65b07e9f2540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:27:20.213918    5620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0203 10:27:20.215663    5620 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-436100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-436100"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3727872s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.37s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-436100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-436100: (1.4805808s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (10.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-722800 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-722800 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv: (10.2446678s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (10.25s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0203 10:27:36.380592    5452 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0203 10:27:36.380734    5452 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
--- PASS: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-722800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-722800: exit status 85 (335.4794ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-436100 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:27 UTC |                     |
	|         | -p download-only-436100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:27 UTC | 03 Feb 25 10:27 UTC |
	| delete  | -p download-only-436100        | download-only-436100 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:27 UTC | 03 Feb 25 10:27 UTC |
	| start   | -o=json --download-only        | download-only-722800 | minikube5\jenkins | v1.35.0 | 03 Feb 25 10:27 UTC |                     |
	|         | -p download-only-722800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/03 10:27:26
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 10:27:26.245149   11388 out.go:345] Setting OutFile to fd 792 ...
	I0203 10:27:26.295914   11388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:27:26.295914   11388 out.go:358] Setting ErrFile to fd 744...
	I0203 10:27:26.296912   11388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:27:26.315669   11388 out.go:352] Setting JSON to true
	I0203 10:27:26.318673   11388 start.go:129] hostinfo: {"hostname":"minikube5","uptime":163047,"bootTime":1738415398,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 10:27:26.318673   11388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 10:27:26.476475   11388 out.go:97] [download-only-722800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 10:27:26.476756   11388 notify.go:220] Checking for updates...
	I0203 10:27:26.479708   11388 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 10:27:26.482421   11388 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 10:27:26.489007   11388 out.go:169] MINIKUBE_LOCATION=20354
	I0203 10:27:26.491253   11388 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0203 10:27:26.496477   11388 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 10:27:26.497053   11388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0203 10:27:31.531595   11388 out.go:97] Using the hyperv driver based on user configuration
	I0203 10:27:31.531921   11388 start.go:297] selected driver: hyperv
	I0203 10:27:31.531921   11388 start.go:901] validating driver "hyperv" against <nil>
	I0203 10:27:31.531921   11388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0203 10:27:31.575448   11388 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0203 10:27:31.576467   11388 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 10:27:31.576688   11388 cni.go:84] Creating CNI manager for ""
	I0203 10:27:31.576719   11388 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 10:27:31.576761   11388 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 10:27:31.576934   11388 start.go:340] cluster config:
	{Name:download-only-722800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-722800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0203 10:27:31.577244   11388 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 10:27:31.580476   11388 out.go:97] Starting "download-only-722800" primary control-plane node in "download-only-722800" cluster
	I0203 10:27:31.580574   11388 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 10:27:31.640082   11388 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 10:27:31.640193   11388 cache.go:56] Caching tarball of preloaded images
	I0203 10:27:31.640598   11388 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 10:27:31.644987   11388 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0203 10:27:31.645531   11388 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:31.717871   11388 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f45d35459b7bc8c69a7c5dddb9b2c151 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0203 10:27:34.191865   11388 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:34.192607   11388 preload.go:254] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 ...
	I0203 10:27:35.095968   11388 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0203 10:27:35.096972   11388 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-722800\config.json ...
	I0203 10:27:35.097528   11388 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-722800\config.json: {Name:mke504e23e08ffa88c4f4dd92d1e4e6b4e9ac832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 10:27:35.097808   11388 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0203 10:27:35.099405   11388 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.32.1/kubectl.exe
	
	
	* The control-plane node download-only-722800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-722800"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.34s)
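The "Last Start" log above shows the preload tarball being fetched with an md5 checksum embedded in the URL and then verified on disk before use. As a rough, hedged illustration of that download-then-verify step (not minikube's actual download code; the helper name is made up, while the URL and digest are copied from the log lines above):

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "os"
)

// downloadAndVerify fetches url into dest and checks its MD5 digest.
// Simplified sketch of the "download preload, then verify checksum"
// step visible in the log, not minikube's real implementation.
func downloadAndVerify(url, dest, wantMD5 string) error {
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    out, err := os.Create(dest)
    if err != nil {
        return err
    }
    defer out.Close()

    // Hash while writing so the file is only read once.
    h := md5.New()
    if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
        return err
    }

    got := hex.EncodeToString(h.Sum(nil))
    if got != wantMD5 {
        return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    }
    return nil
}

func main() {
    // URL and digest copied from the log lines above.
    err := downloadAndVerify(
        "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4",
        "preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4",
        "f45d35459b7bc8c69a7c5dddb9b2c151",
    )
    fmt.Println("download+verify:", err)
}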

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (1.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.5504407s)
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (1.55s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (1.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-722800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-722800: (1.4939542s)
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (1.49s)

                                                
                                    
TestBinaryMirror (6.47s)

                                                
                                                
=== RUN   TestBinaryMirror
I0203 10:27:42.856469    5452 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-035700 --alsologtostderr --binary-mirror http://127.0.0.1:56849 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-035700 --alsologtostderr --binary-mirror http://127.0.0.1:56849 --driver=hyperv: (5.7970464s)
helpers_test.go:175: Cleaning up "binary-mirror-035700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-035700
--- PASS: TestBinaryMirror (6.47s)
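TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP endpoint (http://127.0.0.1:56849 above) so Kubernetes binaries are resolved against it instead of dl.k8s.io. A minimal sketch of such a mirror follows, assuming the binaries and their .sha256 files have already been laid out under a local directory that copies the upstream release/VERSION/bin/OS/ARCH/ layout; the ./mirror directory name is an assumption, and the exact path mapping minikube expects is not shown in this log.

package main

import (
    "log"
    "net/http"
)

func main() {
    // Serve a local directory tree as the binary mirror. For --binary-mirror
    // to find kubectl here, the tree has to mimic the upstream layout, roughly
    //   ./mirror/release/v1.32.1/bin/windows/amd64/kubectl.exe
    // plus the matching .sha256 files. Directory name and layout are assumptions.
    fs := http.FileServer(http.Dir("./mirror"))
    log.Println("serving ./mirror on http://127.0.0.1:56849")
    log.Fatal(http.ListenAndServe("127.0.0.1:56849", fs))
}

With that running, a start command like the logged one (out/minikube-windows-amd64.exe start --download-only -p binary-mirror-035700 --binary-mirror http://127.0.0.1:56849 --driver=hyperv) would resolve the binaries against the local server.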

                                                
                                    
TestOffline (395.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-230000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-230000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m55.3829592s)
helpers_test.go:175: Cleaning up "offline-docker-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-230000
E0203 12:54:48.991566    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-230000: (39.778813s)
--- PASS: TestOffline (395.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-826100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-826100: exit status 85 (247.5885ms)

                                                
                                                
-- stdout --
	* Profile "addons-826100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-826100"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-826100
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-826100: exit status 85 (265.9421ms)

                                                
                                                
-- stdout --
	* Profile "addons-826100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-826100"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

                                                
                                    
TestAddons/Setup (419.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-826100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-826100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m59.3026413s)
--- PASS: TestAddons/Setup (419.30s)

                                                
                                    
TestAddons/serial/Volcano (64.63s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 21.4939ms
addons_test.go:815: volcano-admission stabilized in 21.6682ms
addons_test.go:807: volcano-scheduler stabilized in 21.7658ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-qhhgh" [db1b6ebd-91a6-446e-95ae-7791cf5f333d] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0062315s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-mjps5" [1f331a80-0dfb-425b-88e3-7b073686cf85] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0060242s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-49rs7" [4d3cb4b0-3130-4867-9c02-73b72a2f0a6a] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0063556s
addons_test.go:842: (dbg) Run:  kubectl --context addons-826100 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-826100 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-826100 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [589e81ad-fd6e-4e66-906e-bd486a99f28d] Pending
helpers_test.go:344: "test-job-nginx-0" [589e81ad-fd6e-4e66-906e-bd486a99f28d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [589e81ad-fd6e-4e66-906e-bd486a99f28d] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0075008s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable volcano --alsologtostderr -v=1: (24.7609246s)
--- PASS: TestAddons/serial/Volcano (64.63s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-826100 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-826100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-826100 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-826100 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d8974989-0e51-42c6-a665-551d4b49a791] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d8974989-0e51-42c6-a665-551d4b49a791] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0059112s
addons_test.go:633: (dbg) Run:  kubectl --context addons-826100 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-826100 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-826100 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-826100 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.42s)

                                                
                                    
TestAddons/parallel/Registry (32.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 13.6156ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-c8vff" [dca3591e-b8dd-46bf-816a-dd85ecd19771] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0115045s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pg5l6" [5704d637-771e-45ad-a24e-74a2306993dc] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0068003s
addons_test.go:331: (dbg) Run:  kubectl --context addons-826100 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-826100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-826100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.8234793s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 ip: (2.4281706s)
2025/02/03 10:36:52 [DEBUG] GET http://172.25.10.60:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable registry --alsologtostderr -v=1: (13.913204s)
--- PASS: TestAddons/parallel/Registry (32.40s)
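The registry check above has two parts: a wget --spider probe of registry.kube-system.svc.cluster.local from a pod inside the cluster, and a plain GET of the node endpoint (http://172.25.10.60:5000) from the host. A rough Go equivalent of that reachability probe, offered only as a sketch and not the test's own code (the target URL is the host-side address printed in the log):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// probe does the moral equivalent of wget --spider -S: issue a request,
// report status and headers, and discard the body.
func probe(url string) error {
    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Head(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
    for k, v := range resp.Header {
        fmt.Println(" ", k, v)
    }
    return nil
}

func main() {
    // Address taken from the DEBUG line in the log; adjust for your cluster.
    if err := probe("http://172.25.10.60:5000"); err != nil {
        fmt.Println("registry not reachable:", err)
    }
}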

                                                
                                    
TestAddons/parallel/Ingress (61.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-826100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-826100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-826100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a83dff01-c88a-4ab4-9849-7afa2a1b3b54] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a83dff01-c88a-4ab4-9849-7afa2a1b3b54] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0099711s
I0203 10:37:47.738623    5452 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.2579648s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-826100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 ip: (2.1908712s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.25.10.60
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable ingress-dns --alsologtostderr -v=1: (14.6048762s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable ingress --alsologtostderr -v=1: (20.6093425s)
--- PASS: TestAddons/parallel/Ingress (61.89s)
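The ingress verification curls http://127.0.0.1/ from inside the VM with the header 'Host: nginx.example.com', so it exercises host-based routing rather than DNS. The same idea from a standalone Go client is sketched below; the IP is the minikube IP the test printed earlier (172.25.10.60) and the hostname comes from the testdata manifest, so treat both as placeholders.

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    // Send the request to the ingress controller's address, but claim the
    // virtual host the Ingress rule matches on. This mirrors
    // curl -s http://IP/ -H 'Host: nginx.example.com' from the log.
    req, err := http.NewRequest("GET", "http://172.25.10.60/", nil)
    if err != nil {
        panic(err)
    }
    req.Host = "nginx.example.com" // routed by the Ingress rule, not by DNS

    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Printf("received %d bytes of body\n", len(body))
}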

                                                
                                    
TestAddons/parallel/InspektorGadget (27.47s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gpg52" [f13a1169-1747-4315-a3b0-a9cb09e86d9b] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0062985s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable inspektor-gadget --alsologtostderr -v=1: (21.4618696s)
--- PASS: TestAddons/parallel/InspektorGadget (27.47s)

                                                
                                    
TestAddons/parallel/MetricsServer (20.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 11.9023ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0203 10:36:33.577557    5452 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0203 10:36:33.577557    5452 kapi.go:107] duration metric: took 14.3483ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-9v8fb" [bbaa26ef-71f5-4bc6-9242-18daf7a2d066] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0062594s
addons_test.go:402: (dbg) Run:  kubectl --context addons-826100 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable metrics-server --alsologtostderr -v=1: (14.7743487s)
--- PASS: TestAddons/parallel/MetricsServer (20.95s)

                                                
                                    
TestAddons/parallel/CSI (89.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0203 10:36:33.563208    5452 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 14.4058ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-826100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-826100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d309bce3-3174-4d29-bfbb-3620a94e4b1a] Pending
helpers_test.go:344: "task-pv-pod" [d309bce3-3174-4d29-bfbb-3620a94e4b1a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d309bce3-3174-4d29-bfbb-3620a94e4b1a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.006431s
addons_test.go:511: (dbg) Run:  kubectl --context addons-826100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-826100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-826100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-826100 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-826100 delete pod task-pv-pod: (2.0793603s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-826100 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-826100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-826100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1a3ead0a-f8c3-4607-84b6-d5c8fdcaecb2] Pending
helpers_test.go:344: "task-pv-pod-restore" [1a3ead0a-f8c3-4607-84b6-d5c8fdcaecb2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1a3ead0a-f8c3-4607-84b6-d5c8fdcaecb2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0058801s
addons_test.go:553: (dbg) Run:  kubectl --context addons-826100 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-826100 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-826100 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable volumesnapshots --alsologtostderr -v=1: (14.8426725s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.9370808s)
--- PASS: TestAddons/parallel/CSI (89.68s)
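The repeated "kubectl get pvc hpvc -o jsonpath={.status.phase}" lines above are the test helper polling until the claim reports Bound. A standalone sketch of that wait loop, shelling out to kubectl the same way but with a made-up function name and an illustrative poll interval (this is not the real helpers_test.go code):

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// waitForPVCPhase polls kubectl until the claim reaches the wanted phase
// or the timeout expires. Simplified sketch of the wait seen in the log.
func waitForPVCPhase(kubecontext, name, namespace, want string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "--context", kubecontext,
            "get", "pvc", name, "-n", namespace,
            "-o", "jsonpath={.status.phase}").Output()
        if err == nil && strings.TrimSpace(string(out)) == want {
            return nil
        }
        time.Sleep(2 * time.Second) // poll interval is illustrative
    }
    return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, name, want, timeout)
}

func main() {
    // Values mirror the log: context addons-826100, pvc "hpvc" in "default".
    err := waitForPVCPhase("addons-826100", "hpvc", "default", "Bound", 6*time.Minute)
    fmt.Println(err)
}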

                                                
                                    
TestAddons/parallel/Headlamp (39.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-826100 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-826100 --alsologtostderr -v=1: (14.4130786s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-ztfjv" [bcc1e8b3-ad43-4a6b-be0b-a73fdb111517] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-ztfjv" [bcc1e8b3-ad43-4a6b-be0b-a73fdb111517] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-ztfjv" [bcc1e8b3-ad43-4a6b-be0b-a73fdb111517] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0044933s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable headlamp --alsologtostderr -v=1: (7.0588558s)
--- PASS: TestAddons/parallel/Headlamp (39.48s)

                                                
                                    
TestAddons/parallel/CloudSpanner (19.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-hk7z7" [8e423aea-b949-4ad8-a953-2a00221483ec] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0053447s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable cloud-spanner --alsologtostderr -v=1: (14.152018s)
--- PASS: TestAddons/parallel/CloudSpanner (19.17s)

                                                
                                    
TestAddons/parallel/LocalPath (83.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-826100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-826100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5c72a477-6af6-494e-8ef7-4d92912639a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5c72a477-6af6-494e-8ef7-4d92912639a1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5c72a477-6af6-494e-8ef7-4d92912639a1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0079114s
addons_test.go:906: (dbg) Run:  kubectl --context addons-826100 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 ssh "cat /opt/local-path-provisioner/pvc-15c38253-0ff6-47a2-bcfb-7b8ee7197db6_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 ssh "cat /opt/local-path-provisioner/pvc-15c38253-0ff6-47a2-bcfb-7b8ee7197db6_default_test-pvc/file1": (9.2607292s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-826100 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-826100 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.2536334s)
--- PASS: TestAddons/parallel/LocalPath (83.04s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (20.02s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x4vss" [a8388ec7-d553-4ba1-9e1e-2d7c14173f6a] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0066441s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable nvidia-device-plugin --alsologtostderr -v=1: (14.0123264s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.02s)

                                                
                                    
TestAddons/parallel/Yakd (25.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-79t5j" [6a74e6b6-911d-4b23-99ad-206fdf52c9b0] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006351s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-826100 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-826100 addons disable yakd --alsologtostderr -v=1: (19.4664891s)
--- PASS: TestAddons/parallel/Yakd (25.48s)

                                                
                                    
TestAddons/StoppedEnableDisable (51.89s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-826100
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-826100: (40.2402655s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-826100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-826100: (4.6342355s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-826100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-826100: (4.4674748s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-826100
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-826100: (2.5429114s)
--- PASS: TestAddons/StoppedEnableDisable (51.89s)

                                                
                                    
TestCertOptions (546.17s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-260500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0203 13:09:49.001730    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 13:10:25.172321    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-260500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m3.0838991s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-260500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-260500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.1865236s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-260500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-260500 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-260500 -- "sudo cat /etc/kubernetes/admin.conf": (9.0579255s)
helpers_test.go:175: Cleaning up "cert-options-260500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-260500
E0203 13:18:28.270941    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-260500: (44.7143127s)
--- PASS: TestCertOptions (546.17s)
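TestCertOptions starts the cluster with extra --apiserver-ips/--apiserver-names and a non-default --apiserver-port, then reads /var/lib/minikube/certs/apiserver.crt back with openssl to confirm the extra SANs made it into the certificate. The equivalent inspection in Go, as a sketch that assumes you have already copied apiserver.crt to the local machine (for example via minikube ssh or cp); the local file name is illustrative:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // apiserver.crt copied out of the VM beforehand; the path is illustrative.
    data, err := os.ReadFile("apiserver.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // These are the fields the openssl call is interested in: the extra
    // SANs passed via --apiserver-names / --apiserver-ips, plus validity.
    fmt.Println("DNS SANs:", cert.DNSNames)
    fmt.Println("IP SANs: ", cert.IPAddresses)
    fmt.Println("NotAfter:", cert.NotAfter)
}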

                                                
                                    
TestCertExpiration (889.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-359500 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-359500 --memory=2048 --cert-expiration=3m --driver=hyperv: (4m8.4428716s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-359500 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0203 13:14:32.101280    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 13:14:49.005334    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 13:15:25.175798    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-359500 --memory=2048 --cert-expiration=8760h --driver=hyperv: (6m58.4310543s)
helpers_test.go:175: Cleaning up "cert-expiration-359500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-359500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-359500: (42.137289s)
--- PASS: TestCertExpiration (889.01s)

                                                
                                    
TestDockerFlags (331.86s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-686900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-686900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m33.6617227s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-686900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-686900 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.3095448s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-686900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-686900 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.0516004s)
helpers_test.go:175: Cleaning up "docker-flags-686900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-686900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-686900: (39.8326406s)
--- PASS: TestDockerFlags (331.86s)
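TestDockerFlags passes --docker-env and --docker-opt values at start time and then reads them back through systemctl show docker. A hedged sketch of that round-trip check from the host, reusing the exact minikube ssh command shown in the log (profile name copied from above; the expected substrings are the values given on the start line):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    profile := "docker-flags-686900" // profile name from the log above

    // Same command the test runs to read docker's environment back out.
    out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
        "ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
    if err != nil {
        fmt.Println("ssh failed:", err)
        return
    }
    env := string(out)

    // The values below are exactly what --docker-env passed at start time.
    for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
        if strings.Contains(env, want) {
            fmt.Println("found", want)
        } else {
            fmt.Println("missing", want)
        }
    }
}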

                                                
                                    
TestForceSystemdFlag (243.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-470300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-470300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m8.7658131s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-470300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-470300 ssh "docker info --format {{.CgroupDriver}}": (9.0879581s)
helpers_test.go:175: Cleaning up "force-systemd-flag-470300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-470300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-470300: (45.2913481s)
--- PASS: TestForceSystemdFlag (243.15s)

                                                
                                    
TestForceSystemdEnv (403.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-424900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-424900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m55.8457929s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-424900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-424900 ssh "docker info --format {{.CgroupDriver}}": (9.1548603s)
helpers_test.go:175: Cleaning up "force-systemd-env-424900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-424900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-424900: (38.7433155s)
--- PASS: TestForceSystemdEnv (403.74s)

                                                
                                    
TestErrorSpam/start (16.02s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run: (5.2455067s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run: (5.4059735s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 start --dry-run: (5.360367s)
--- PASS: TestErrorSpam/start (16.02s)

                                                
                                    
TestErrorSpam/status (34.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status: (11.770806s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status: (11.079035s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 status: (11.2241125s)
--- PASS: TestErrorSpam/status (34.08s)

                                                
                                    
TestErrorSpam/pause (21.24s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause: (7.2208923s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause: (7.0552079s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 pause: (6.963139s)
--- PASS: TestErrorSpam/pause (21.24s)

                                                
                                    
TestErrorSpam/unpause (21.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause: (7.5060381s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause: (7.0859118s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 unpause: (7.058164s)
--- PASS: TestErrorSpam/unpause (21.65s)

                                                
                                    
TestErrorSpam/stop (53.04s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop
E0203 10:44:48.904640    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:48.911031    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:48.922636    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:48.944842    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:48.986797    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:49.068682    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:49.230772    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:49.555886    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:50.197642    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:51.480278    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:54.042798    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:44:59.165222    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:45:09.407854    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop: (32.6806054s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop: (10.3359247s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop
E0203 10:45:29.891119    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-903900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-903900 stop: (10.0185144s)
--- PASS: TestErrorSpam/stop (53.04s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5452\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (212.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-266500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0203 10:46:10.853954    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:47:32.778023    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-266500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m32.5427652s)
--- PASS: TestFunctional/serial/StartWithProxy (212.55s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (118.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0203 10:49:26.629719    5452 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-266500 --alsologtostderr -v=8
E0203 10:49:48.907665    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 10:50:16.622043    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-266500 --alsologtostderr -v=8: (1m58.2487838s)
functional_test.go:680: soft start took 1m58.2499024s for "functional-266500" cluster.
I0203 10:51:24.880641    5452 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (118.25s)

                                                
                                    
TestFunctional/serial/KubeContext (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-266500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (24.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:3.1: (8.1901159s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:3.3: (7.9904347s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cache add registry.k8s.io/pause:latest: (7.9168736s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-266500 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1625215617\001
functional_test.go:1094: (dbg) Done: docker build -t minikube-local-cache-test:functional-266500 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1625215617\001: (1.8513194s)
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache add minikube-local-cache-test:functional-266500
functional_test.go:1106: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cache add minikube-local-cache-test:functional-266500: (7.489382s)
functional_test.go:1111: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache delete minikube-local-cache-test:functional-266500
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-266500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.69s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl images
functional_test.go:1141: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl images: (8.6760802s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (33.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1164: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.7154828s)
functional_test.go:1170: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.6629635s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cache reload: (7.4611902s)
functional_test.go:1180: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1180: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.7018067s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (33.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 kubectl -- --context functional-266500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out\kubectl.exe --context functional-266500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.91s)

                                                
                                    
TestFunctional/serial/ExtraConfig (121.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-266500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-266500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m1.1817494s)
functional_test.go:778: restart took 2m1.1819041s for "functional-266500" cluster.
I0203 10:54:45.698646    5452 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (121.18s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-266500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.16s)

                                                
                                    
TestFunctional/serial/LogsCmd (8.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 logs
E0203 10:54:48.910358    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 logs: (8.008076s)
--- PASS: TestFunctional/serial/LogsCmd (8.01s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (9.95s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1662944993\001\logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1662944993\001\logs.txt: (9.9473643s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.95s)

                                                
                                    
TestFunctional/serial/InvalidService (19.44s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-266500 apply -f testdata\invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-266500
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-266500: exit status 115 (15.3145629s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.25.14.246:31254 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-266500 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (19.44s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 config get cpus: exit status 14 (252.8756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 config get cpus: exit status 14 (236.334ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.82s)

                                                
                                    
TestFunctional/parallel/StatusCmd (37.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 status
functional_test.go:871: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 status: (12.1361341s)
functional_test.go:877: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:877: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.0483075s)
functional_test.go:889: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 status -o json
functional_test.go:889: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 status -o json: (11.9745652s)
--- PASS: TestFunctional/parallel/StatusCmd (37.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (40.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-266500 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-266500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-k9cjf" [5d8685da-f991-47ab-bc9c-7e5cbbe7110b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-k9cjf" [5d8685da-f991-47ab-bc9c-7e5cbbe7110b] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.0061094s
functional_test.go:1666: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service hello-node-connect --url
functional_test.go:1666: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 service hello-node-connect --url: (17.1279335s)
functional_test.go:1672: found endpoint for hello-node-connect: http://172.25.14.246:30707
functional_test.go:1692: http://172.25.14.246:30707: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-k9cjf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.25.14.246:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.25.14.246:30707
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (40.49s)
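Note: the connectivity check above fetches the NodePort URL returned by `minikube service ... --url` and inspects the echoserver response. A rough Go sketch of such a probe follows; the endpoint is the hypothetical address from this run, and this is not the test's code.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical NodePort endpoint; a real check would take this from the
	// "minikube service ... --url" output.
	resp, err := http.Get("http://172.25.14.246:30707/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes of echoserver output\n", resp.StatusCode, len(body))
}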

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.67s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ce4a3fd4-5cfa-42b0-bcc6-0980a2d3cbac] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0068737s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-266500 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-266500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-266500 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-266500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [da93f3c5-92e0-421a-832f-9c3a8d4e082c] Pending
helpers_test.go:344: "sp-pod" [da93f3c5-92e0-421a-832f-9c3a8d4e082c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [da93f3c5-92e0-421a-832f-9c3a8d4e082c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0056515s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-266500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-266500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-266500 delete -f testdata/storage-provisioner/pod.yaml: (2.246132s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-266500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [530e9519-0187-4a44-8b9c-8c3588cc6e27] Pending
helpers_test.go:344: "sp-pod" [530e9519-0187-4a44-8b9c-8c3588cc6e27] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [530e9519-0187-4a44-8b9c-8c3588cc6e27] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0097316s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-266500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.25s)

                                                
                                    
TestFunctional/parallel/SSHCmd (20.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "echo hello"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "echo hello": (10.4381342s)
functional_test.go:1759: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "cat /etc/hostname"
functional_test.go:1759: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "cat /etc/hostname": (9.8323069s)
--- PASS: TestFunctional/parallel/SSHCmd (20.27s)

                                                
                                    
TestFunctional/parallel/CpCmd (51.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.1079323s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /home/docker/cp-test.txt": (9.961289s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cp functional-266500:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1142263615\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cp functional-266500:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1142263615\001\cp-test.txt: (8.8447704s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /home/docker/cp-test.txt": (8.7234349s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.6948502s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh -n functional-266500 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.4521059s)
--- PASS: TestFunctional/parallel/CpCmd (51.79s)

                                                
                                    
TestFunctional/parallel/MySQL (56.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-266500 replace --force -f testdata\mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-w8779" [56955900-4fca-4e58-a173-47fc3466299a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-w8779" [56955900-4fca-4e58-a173-47fc3466299a] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 43.0061467s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;": exit status 1 (264.8933ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0203 10:58:44.215919    5452 retry.go:31] will retry after 775.382975ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;": exit status 1 (270.9351ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0203 10:58:45.269342    5452 retry.go:31] will retry after 1.281970067s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;": exit status 1 (282.3867ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0203 10:58:46.841528    5452 retry.go:31] will retry after 2.542952088s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;": exit status 1 (265.6257ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0203 10:58:49.663062    5452 retry.go:31] will retry after 4.131041864s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;": exit status 1 (247.9264ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0203 10:58:54.048937    5452 retry.go:31] will retry after 2.922620378s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-266500 exec mysql-58ccfd96bb-w8779 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (56.71s)
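Note: the repeated "will retry after ..." lines above come from re-running the mysql client with a growing wait until the server accepts connections. An illustrative Go sketch of that retry-with-growing-backoff pattern follows; it is not minikube's retry package, and the failing operation is hypothetical.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping a little longer after each
// failure, and returns the last error if it never succeeds.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("mysql not ready yet") // hypothetical failure
		}
		return nil
	})
	fmt.Println("result:", err)
}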

                                                
                                    
TestFunctional/parallel/FileSync (9.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/5452/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/test/nested/copy/5452/hosts"
functional_test.go:1948: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/test/nested/copy/5452/hosts": (9.3555873s)
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.36s)

                                                
                                    
TestFunctional/parallel/CertSync (54.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/5452.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/5452.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/5452.pem": (9.1987939s)
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/5452.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /usr/share/ca-certificates/5452.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /usr/share/ca-certificates/5452.pem": (8.9163289s)
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.1152053s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/54522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/54522.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/54522.pem": (9.0337957s)
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/54522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /usr/share/ca-certificates/54522.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /usr/share/ca-certificates/54522.pem": (9.0481822s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.1181891s)
--- PASS: TestFunctional/parallel/CertSync (54.43s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-266500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.18s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (9.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 ssh "sudo systemctl is-active crio": exit status 1 (9.3854375s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.39s)

                                                
                                    
TestFunctional/parallel/License (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2305: (dbg) Done: out/minikube-windows-amd64.exe license: (1.4727401s)
--- PASS: TestFunctional/parallel/License (1.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12192: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 12004: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-266500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f6edbd31-7278-4df0-94d5-f6815baafd7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f6edbd31-7278-4df0-94d5-f6815baafd7b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 26.0054485s
I0203 10:55:59.685634    5452 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-266500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11704: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-266500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-266500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-5wpcv" [426a6a8f-e3d5-4e9e-93d9-89d15c33db21] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-5wpcv" [426a6a8f-e3d5-4e9e-93d9-89d15c33db21] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-5wpcv" [426a6a8f-e3d5-4e9e-93d9-89d15c33db21] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0065076s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (12.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service list
functional_test.go:1476: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 service list: (12.5892778s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (12.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.4955436s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (12.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 service list -o json: (12.6121448s)
functional_test.go:1511: Took "12.612307s" to run "out/minikube-windows-amd64.exe -p functional-266500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (12.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1327: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.4853099s)
functional_test.go:1332: Took "12.4858404s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1346: Took "233.3297ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.72s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (13.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1378: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (13.0961552s)
functional_test.go:1383: Took "13.0961552s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1396: Took "234.6971ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (13.33s)

                                                
                                    
TestFunctional/parallel/Version/short (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.23s)

                                                
                                    
TestFunctional/parallel/Version/components (7.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 version -o=json --components: (7.3181043s)
--- PASS: TestFunctional/parallel/Version/components (7.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (6.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls --format short --alsologtostderr: (6.8873935s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-266500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-266500
docker.io/kicbase/echo-server:functional-266500
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-266500 image ls --format short --alsologtostderr:
I0203 10:58:33.403662    8716 out.go:345] Setting OutFile to fd 1252 ...
I0203 10:58:33.454674    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:33.454674    8716 out.go:358] Setting ErrFile to fd 1932...
I0203 10:58:33.454674    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:33.468676    8716 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:33.468676    8716 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:33.469672    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:35.562028    8716 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:35.562113    8716 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:35.570820    8716 ssh_runner.go:195] Run: systemctl --version
I0203 10:58:35.570820    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:37.605117    8716 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:37.605186    8716 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:37.605186    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-266500 ).networkadapters[0]).ipaddresses[0]
I0203 10:58:39.987560    8716 main.go:141] libmachine: [stdout =====>] : 172.25.14.246

                                                
                                                
I0203 10:58:39.987560    8716 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:39.987641    8716 sshutil.go:53] new ssh client: &{IP:172.25.14.246 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-266500\id_rsa Username:docker}
I0203 10:58:40.084428    8716 ssh_runner.go:235] Completed: systemctl --version: (4.5134323s)
I0203 10:58:40.094973    8716 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (6.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (6.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls --format table --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls --format table --alsologtostderr: (6.8260341s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-266500 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-266500 | e10a1718fded8 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-266500 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| docker.io/library/nginx                     | latest            | 9bea9f2796e23 | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 93f9c72967dbc | 47MB   |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-266500 image ls --format table --alsologtostderr:
I0203 10:58:48.197894    8848 out.go:345] Setting OutFile to fd 1052 ...
I0203 10:58:48.248886    8848 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:48.248886    8848 out.go:358] Setting ErrFile to fd 1820...
I0203 10:58:48.248886    8848 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:48.261874    8848 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:48.261874    8848 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:48.262881    8848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:50.267822    8848 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:50.267916    8848 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:50.276354    8848 ssh_runner.go:195] Run: systemctl --version
I0203 10:58:50.276354    8848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:52.316434    8848 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:52.316480    8848 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:52.316480    8848 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-266500 ).networkadapters[0]).ipaddresses[0]
I0203 10:58:54.724187    8848 main.go:141] libmachine: [stdout =====>] : 172.25.14.246

                                                
                                                
I0203 10:58:54.724187    8848 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:54.724187    8848 sshutil.go:53] new ssh client: &{IP:172.25.14.246 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-266500\id_rsa Username:docker}
I0203 10:58:54.827112    8848 ssh_runner.go:235] Completed: systemctl --version: (4.5507048s)
I0203 10:58:54.833636    8848 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (6.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls --format json --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls --format json --alsologtostderr: (6.8805571s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-266500 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-266500"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbf
c","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e10a1718fded855813450bbffe8256ac047625064c627b764509a91322bda64f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-266500"],"size":"30"},{"id":"93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/libr
ary/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-266500 image ls --format json --alsologtostderr:
I0203 10:58:41.328624    3784 out.go:345] Setting OutFile to fd 856 ...
I0203 10:58:41.379804    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:41.379804    3784 out.go:358] Setting ErrFile to fd 1876...
I0203 10:58:41.381097    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:41.395337    3784 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:41.395932    3784 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:41.396286    3784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:43.414119    3784 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:43.414214    3784 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:43.429330    3784 ssh_runner.go:195] Run: systemctl --version
I0203 10:58:43.429330    3784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:45.488656    3784 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:45.488656    3784 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:45.488656    3784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-266500 ).networkadapters[0]).ipaddresses[0]
I0203 10:58:47.889975    3784 main.go:141] libmachine: [stdout =====>] : 172.25.14.246

                                                
                                                
I0203 10:58:47.889975    3784 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:47.890529    3784 sshutil.go:53] new ssh client: &{IP:172.25.14.246 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-266500\id_rsa Username:docker}
I0203 10:58:47.992482    3784 ssh_runner.go:235] Completed: systemctl --version: (4.5630996s)
I0203 10:58:47.999799    3784 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (6.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls --format yaml --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls --format yaml --alsologtostderr: (7.0127326s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-266500 image ls --format yaml --alsologtostderr:
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"
- id: 93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-266500
size: "4940000"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e10a1718fded855813450bbffe8256ac047625064c627b764509a91322bda64f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-266500
size: "30"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-266500 image ls --format yaml --alsologtostderr:
I0203 10:58:34.309669   12212 out.go:345] Setting OutFile to fd 1524 ...
I0203 10:58:34.398322   12212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:34.398322   12212 out.go:358] Setting ErrFile to fd 1956...
I0203 10:58:34.398401   12212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:34.412411   12212 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:34.412411   12212 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:34.413411   12212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:36.578444   12212 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:36.578444   12212 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:36.587371   12212 ssh_runner.go:195] Run: systemctl --version
I0203 10:58:36.587371   12212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:38.606275   12212 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:38.607142   12212 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:38.607223   12212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-266500 ).networkadapters[0]).ipaddresses[0]
I0203 10:58:41.004953   12212 main.go:141] libmachine: [stdout =====>] : 172.25.14.246

                                                
                                                
I0203 10:58:41.004953   12212 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:41.004953   12212 sshutil.go:53] new ssh client: &{IP:172.25.14.246 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-266500\id_rsa Username:docker}
I0203 10:58:41.115390   12212 ssh_runner.go:235] Completed: systemctl --version: (4.5279672s)
I0203 10:58:41.127603   12212 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (25.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-266500 ssh pgrep buildkitd: exit status 1 (8.857119s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image build -t localhost/my-image:functional-266500 testdata\build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image build -t localhost/my-image:functional-266500 testdata\build --alsologtostderr: (10.0922482s)
functional_test.go:340: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-266500 image build -t localhost/my-image:functional-266500 testdata\build --alsologtostderr:
I0203 10:58:49.157627    3428 out.go:345] Setting OutFile to fd 1348 ...
I0203 10:58:49.251750    3428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:49.251750    3428 out.go:358] Setting ErrFile to fd 1464...
I0203 10:58:49.251750    3428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0203 10:58:49.265341    3428 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:49.284335    3428 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0203 10:58:49.285343    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:51.307507    3428 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:51.307507    3428 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:51.320723    3428 ssh_runner.go:195] Run: systemctl --version
I0203 10:58:51.320723    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-266500 ).state
I0203 10:58:53.335029    3428 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0203 10:58:53.335712    3428 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:53.335712    3428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-266500 ).networkadapters[0]).ipaddresses[0]
I0203 10:58:55.725413    3428 main.go:141] libmachine: [stdout =====>] : 172.25.14.246

                                                
                                                
I0203 10:58:55.725413    3428 main.go:141] libmachine: [stderr =====>] : 
I0203 10:58:55.726082    3428 sshutil.go:53] new ssh client: &{IP:172.25.14.246 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-266500\id_rsa Username:docker}
I0203 10:58:55.820537    3428 ssh_runner.go:235] Completed: systemctl --version: (4.4997023s)
I0203 10:58:55.820660    3428 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3641547645.tar
I0203 10:58:55.829980    3428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0203 10:58:55.856903    3428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3641547645.tar
I0203 10:58:55.863971    3428 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3641547645.tar: stat -c "%s %y" /var/lib/minikube/build/build.3641547645.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3641547645.tar': No such file or directory
I0203 10:58:55.863971    3428 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3641547645.tar --> /var/lib/minikube/build/build.3641547645.tar (3072 bytes)
I0203 10:58:55.918775    3428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3641547645
I0203 10:58:55.949031    3428 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3641547645 -xf /var/lib/minikube/build/build.3641547645.tar
I0203 10:58:55.975950    3428 docker.go:360] Building image: /var/lib/minikube/build/build.3641547645
I0203 10:58:55.983197    3428 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-266500 /var/lib/minikube/build/build.3641547645
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:6fe0bacd9366fa7c4b0dde3b19d49d9176e31820b8bd2565d940cdaadedab7df
#8 writing image sha256:6fe0bacd9366fa7c4b0dde3b19d49d9176e31820b8bd2565d940cdaadedab7df done
#8 naming to localhost/my-image:functional-266500 0.0s done
#8 DONE 0.2s
I0203 10:58:59.043741    3428 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-266500 /var/lib/minikube/build/build.3641547645: (3.0605092s)
I0203 10:58:59.051265    3428 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3641547645
I0203 10:58:59.080993    3428 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3641547645.tar
I0203 10:58:59.099786    3428 build_images.go:217] Built localhost/my-image:functional-266500 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3641547645.tar
I0203 10:58:59.099894    3428 build_images.go:133] succeeded building to: functional-266500
I0203 10:58:59.099894    3428 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (6.5350017s)
E0203 10:59:48.914884    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:01:11.991754    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (25.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9291195s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-266500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (15.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr: (8.2003916s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (7.1772296s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (15.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr: (7.4985666s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (7.148828s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-266500
functional_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image load --daemon kicbase/echo-server:functional-266500 --alsologtostderr: (7.7206796s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (7.1369906s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image save kicbase/echo-server:functional-266500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image save kicbase/echo-server:functional-266500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.1220988s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (14.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image rm kicbase/echo-server:functional-266500 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image rm kicbase/echo-server:functional-266500 --alsologtostderr: (7.0006391s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (7.1255647s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.13s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (38.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:516: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-266500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-266500"
functional_test.go:516: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-266500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-266500": (25.7503243s)
functional_test.go:539: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-266500 docker-env | Invoke-Expression ; docker images"
functional_test.go:539: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-266500 docker-env | Invoke-Expression ; docker images": (12.7465663s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (38.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2: (2.5250822s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2: (2.2996123s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 update-context --alsologtostderr -v=2: (2.625573s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.5716991s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image ls: (6.9432681s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-266500
functional_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-266500 image save --daemon kicbase/echo-server:functional-266500 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p functional-266500 image save --daemon kicbase/echo-server:functional-266500 --alsologtostderr: (7.175297s)
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-266500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.37s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.19s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-266500
--- PASS: TestFunctional/delete_echo-server_images (0.19s)

                                                
                                    
TestFunctional/delete_my-image_image (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-266500
--- PASS: TestFunctional/delete_my-image_image (0.07s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-266500
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (661.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-429000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0203 11:04:48.918136    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.088040    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.095297    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.107062    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.129416    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.171636    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.253763    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.416351    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:25.738920    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:26.381613    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:27.664492    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:30.226262    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:35.349181    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:05:45.591485    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:06:06.074712    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:06:47.037418    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:08:08.960218    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:09:48.921499    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:10:25.092072    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:10:52.803941    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-429000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m28.4854567s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: (33.4414845s)
--- PASS: TestMultiControlPlane/serial/StartCluster (661.93s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- rollout status deployment/busybox: (4.2975738s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- nslookup kubernetes.io: (2.0509654s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- nslookup kubernetes.io: (1.7351927s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hcrnz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-hjbfz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-429000 -- exec busybox-58667487b6-k7s2q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.68s)
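The DeployApp run above rolls out a busybox deployment and then verifies DNS from inside every pod with nslookup, against both an external name and the in-cluster service names. A minimal sketch of the same probe follows; it assumes plain kubectl with a context named ha-429000 behaves like the `minikube kubectl -p ha-429000 --` wrapper used in the log, and the file name dns_check.go and the direct kubectl invocation are assumptions, not part of the test.

// dns_check.go - sketch of the per-pod DNS probe exercised by DeployApp.
// Assumes: kubectl on PATH and a kubeconfig context "ha-429000"; the test
// itself goes through "minikube kubectl -p ha-429000 --" instead.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List pod names in the default namespace, mirroring the jsonpath query in the test.
	out, err := exec.Command("kubectl", "--context", "ha-429000",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Probe an external name and the in-cluster API service, as the test does.
		for _, host := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
			if err := exec.Command("kubectl", "--context", "ha-429000",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: failed to resolve %s: %v\n", pod, host, err)
			} else {
				fmt.Printf("%s: resolved %s\n", pod, host)
			}
		}
	}
}

As in the test, a non-zero exit from any nslookup would mark that pod's DNS as broken.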

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (242.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-429000 -v=7 --alsologtostderr
E0203 11:15:25.096067    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:17:52.005554    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-429000 -v=7 --alsologtostderr: (3m18.2458374s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: (44.4959408s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (242.74s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-429000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (44.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (44.366179s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (44.37s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (581.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status --output json -v=7 --alsologtostderr
E0203 11:19:48.927817    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:20:25.098182    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 status --output json -v=7 --alsologtostderr: (44.562031s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000:/home/docker/cp-test.txt: (8.8108033s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt": (8.9607169s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000.txt: (8.7611606s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt": (8.7404118s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000_ha-429000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000_ha-429000-m02.txt: (15.2906519s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt": (8.8494951s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m02.txt": (8.8059466s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000_ha-429000-m03.txt
E0203 11:21:48.174241    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000_ha-429000-m03.txt: (15.3600668s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt": (8.7895721s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m03.txt": (8.7716064s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000_ha-429000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000_ha-429000-m04.txt: (15.4548345s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test.txt": (8.8114166s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000_ha-429000-m04.txt": (8.8668743s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m02:/home/docker/cp-test.txt: (8.8311581s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt": (8.7785071s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m02.txt: (8.7621055s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt": (8.9351545s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m02_ha-429000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m02_ha-429000.txt: (15.5216491s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt": (8.8659606s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000.txt": (8.9292486s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000-m02_ha-429000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000-m02_ha-429000-m03.txt: (15.3941943s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt": (8.8210967s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000-m03.txt": (8.8286641s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000-m02_ha-429000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m02:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000-m02_ha-429000-m04.txt: (15.3523807s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test.txt": (8.8037183s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000-m04.txt"
E0203 11:24:48.931017    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000-m02_ha-429000-m04.txt": (8.8014993s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m03:/home/docker/cp-test.txt: (8.8467028s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt": (8.8927135s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m03.txt: (8.8113429s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt"
E0203 11:25:25.102148    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt": (8.7326035s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m03_ha-429000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m03_ha-429000.txt: (15.4964005s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt": (8.7964859s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000.txt": (8.8639096s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt: (15.3086s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt": (8.6943236s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000-m02.txt": (8.840205s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m03:/home/docker/cp-test.txt ha-429000-m04:/home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt: (15.3433892s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test.txt": (8.8193868s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test_ha-429000-m03_ha-429000-m04.txt": (8.817229s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp testdata\cp-test.txt ha-429000-m04:/home/docker/cp-test.txt: (8.8001002s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt": (8.8363405s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile602855653\001\cp-test_ha-429000-m04.txt: (8.7518881s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt": (8.7978092s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m04_ha-429000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000:/home/docker/cp-test_ha-429000-m04_ha-429000.txt: (15.3271048s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt": (8.7306249s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000.txt": (8.7709029s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000-m02:/home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt: (15.3910615s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt": (8.8749498s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m02 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000-m02.txt": (8.812437s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 cp ha-429000-m04:/home/docker/cp-test.txt ha-429000-m03:/home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt: (15.3835865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m04 "sudo cat /home/docker/cp-test.txt": (8.8180162s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 ssh -n ha-429000-m03 "sudo cat /home/docker/cp-test_ha-429000-m04_ha-429000-m03.txt": (8.7475935s)
--- PASS: TestMultiControlPlane/serial/CopyFile (581.79s)
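CopyFile above walks an all-pairs matrix: it copies testdata\cp-test.txt from the host to each node, between every pair of nodes, and back to a temp directory, confirming each transfer with `minikube ssh -n <node> "sudo cat ..."`. One leg of that round trip could be reproduced by hand roughly as follows; the profile and node names are taken from the log, while the bare `minikube` binary name (rather than out/minikube-windows-amd64.exe) and the file name cp_roundtrip.go are assumptions.

// cp_roundtrip.go - one leg of the cp/ssh verification loop from CopyFile.
// Assumes a running profile "ha-429000" with a node "ha-429000-m02";
// both names come from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes minikube with the given arguments and returns its trimmed output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Copy a local file onto the node, as the test does with testdata\cp-test.txt.
	if _, err := run("-p", "ha-429000", "cp", "testdata/cp-test.txt",
		"ha-429000-m02:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	// Read it back over SSH to confirm the contents arrived intact.
	got, err := run("-p", "ha-429000", "ssh", "-n", "ha-429000-m02",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("copied back:", got)
}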

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (69.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 node stop m02 -v=7 --alsologtostderr
E0203 11:29:48.935693    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-429000 node stop m02 -v=7 --alsologtostderr: (34.2215936s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr
E0203 11:30:25.104899    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-429000 status -v=7 --alsologtostderr: exit status 7 (35.6255423s)

                                                
                                                
-- stdout --
	ha-429000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-429000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-429000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-429000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 11:29:59.727768    8060 out.go:345] Setting OutFile to fd 1168 ...
	I0203 11:29:59.777778    8060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:29:59.777778    8060 out.go:358] Setting ErrFile to fd 1072...
	I0203 11:29:59.777778    8060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 11:29:59.789764    8060 out.go:352] Setting JSON to false
	I0203 11:29:59.789764    8060 mustload.go:65] Loading cluster: ha-429000
	I0203 11:29:59.789764    8060 notify.go:220] Checking for updates...
	I0203 11:29:59.790778    8060 config.go:182] Loaded profile config "ha-429000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 11:29:59.790778    8060 status.go:174] checking status of ha-429000 ...
	I0203 11:29:59.791777    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:30:01.849522    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:01.849640    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:01.849640    8060 status.go:371] ha-429000 host status = "Running" (err=<nil>)
	I0203 11:30:01.849640    8060 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:30:01.875350    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:30:04.001045    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:04.001045    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:04.001221    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:06.549560    8060 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:30:06.550027    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:06.550027    8060 host.go:66] Checking if "ha-429000" exists ...
	I0203 11:30:06.559564    8060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:30:06.559564    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000 ).state
	I0203 11:30:08.550785    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:08.550785    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:08.550857    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:11.041759    8060 main.go:141] libmachine: [stdout =====>] : 172.25.12.47
	
	I0203 11:30:11.041759    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:11.042387    8060 sshutil.go:53] new ssh client: &{IP:172.25.12.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000\id_rsa Username:docker}
	I0203 11:30:11.143493    8060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5838775s)
	I0203 11:30:11.153500    8060 ssh_runner.go:195] Run: systemctl --version
	I0203 11:30:11.170273    8060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:30:11.196876    8060 kubeconfig.go:125] found "ha-429000" server: "https://172.25.15.254:8443"
	I0203 11:30:11.196958    8060 api_server.go:166] Checking apiserver status ...
	I0203 11:30:11.206114    8060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:30:11.244332    8060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2150/cgroup
	W0203 11:30:11.262885    8060 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:30:11.271892    8060 ssh_runner.go:195] Run: ls
	I0203 11:30:11.279371    8060 api_server.go:253] Checking apiserver healthz at https://172.25.15.254:8443/healthz ...
	I0203 11:30:11.286886    8060 api_server.go:279] https://172.25.15.254:8443/healthz returned 200:
	ok
	I0203 11:30:11.286886    8060 status.go:463] ha-429000 apiserver status = Running (err=<nil>)
	I0203 11:30:11.286886    8060 status.go:176] ha-429000 status: &{Name:ha-429000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:30:11.286886    8060 status.go:174] checking status of ha-429000-m02 ...
	I0203 11:30:11.287576    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m02 ).state
	I0203 11:30:13.309587    8060 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 11:30:13.310613    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:13.310613    8060 status.go:371] ha-429000-m02 host status = "Stopped" (err=<nil>)
	I0203 11:30:13.310663    8060 status.go:384] host is not running, skipping remaining checks
	I0203 11:30:13.310663    8060 status.go:176] ha-429000-m02 status: &{Name:ha-429000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:30:13.310663    8060 status.go:174] checking status of ha-429000-m03 ...
	I0203 11:30:13.311288    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:30:15.305237    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:15.305955    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:15.306033    8060 status.go:371] ha-429000-m03 host status = "Running" (err=<nil>)
	I0203 11:30:15.306033    8060 host.go:66] Checking if "ha-429000-m03" exists ...
	I0203 11:30:15.306721    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:30:17.307098    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:17.307098    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:17.307194    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:19.699662    8060 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:30:19.699704    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:19.699704    8060 host.go:66] Checking if "ha-429000-m03" exists ...
	I0203 11:30:19.708111    8060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:30:19.708111    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m03 ).state
	I0203 11:30:21.714534    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:21.714534    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:21.714609    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m03 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:24.057394    8060 main.go:141] libmachine: [stdout =====>] : 172.25.0.10
	
	I0203 11:30:24.057465    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:24.057465    8060 sshutil.go:53] new ssh client: &{IP:172.25.0.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m03\id_rsa Username:docker}
	I0203 11:30:24.161639    8060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.453478s)
	I0203 11:30:24.169735    8060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:30:24.195775    8060 kubeconfig.go:125] found "ha-429000" server: "https://172.25.15.254:8443"
	I0203 11:30:24.195775    8060 api_server.go:166] Checking apiserver status ...
	I0203 11:30:24.204752    8060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 11:30:24.238453    8060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	W0203 11:30:24.260461    8060 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 11:30:24.269598    8060 ssh_runner.go:195] Run: ls
	I0203 11:30:24.277065    8060 api_server.go:253] Checking apiserver healthz at https://172.25.15.254:8443/healthz ...
	I0203 11:30:24.288112    8060 api_server.go:279] https://172.25.15.254:8443/healthz returned 200:
	ok
	I0203 11:30:24.288211    8060 status.go:463] ha-429000-m03 apiserver status = Running (err=<nil>)
	I0203 11:30:24.288211    8060 status.go:176] ha-429000-m03 status: &{Name:ha-429000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 11:30:24.288292    8060 status.go:174] checking status of ha-429000-m04 ...
	I0203 11:30:24.288935    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m04 ).state
	I0203 11:30:26.270550    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:26.270609    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:26.270609    8060 status.go:371] ha-429000-m04 host status = "Running" (err=<nil>)
	I0203 11:30:26.270609    8060 host.go:66] Checking if "ha-429000-m04" exists ...
	I0203 11:30:26.271329    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m04 ).state
	I0203 11:30:28.285530    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:28.286024    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:28.286125    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m04 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:30.688095    8060 main.go:141] libmachine: [stdout =====>] : 172.25.10.184
	
	I0203 11:30:30.688095    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:30.688278    8060 host.go:66] Checking if "ha-429000-m04" exists ...
	I0203 11:30:30.696261    8060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 11:30:30.696261    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-429000-m04 ).state
	I0203 11:30:32.685160    8060 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 11:30:32.685938    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:32.685938    8060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-429000-m04 ).networkadapters[0]).ipaddresses[0]
	I0203 11:30:35.060299    8060 main.go:141] libmachine: [stdout =====>] : 172.25.10.184
	
	I0203 11:30:35.060955    8060 main.go:141] libmachine: [stderr =====>] : 
	I0203 11:30:35.061322    8060 sshutil.go:53] new ssh client: &{IP:172.25.10.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-429000-m04\id_rsa Username:docker}
	I0203 11:30:35.163322    8060 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4670111s)
	I0203 11:30:35.171300    8060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 11:30:35.202182    8060 status.go:176] ha-429000-m04 status: &{Name:ha-429000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (69.85s)
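The stderr trace above is a useful record of how the hyperv driver gathers node status: for every node it shells out to PowerShell, first reading `( Hyper-V\Get-VM <name> ).state` and then, for running VMs, `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]`; each round trip costs roughly two seconds in the timestamps, which appears to account for most of the ~35s status call. A sketch of the same probe outside the test harness follows; the VM name is copied from the log, and powershell.exe on PATH plus the file name hyperv_probe.go are assumptions.

// hyperv_probe.go - sketch of the Hyper-V queries visible in the status trace.
// Assumes a local Hyper-V VM named "ha-429000-m02" (name taken from the log)
// and that powershell.exe is available on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ps runs a single PowerShell expression, the way the libmachine lines in the log do.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := ps(`( Hyper-V\Get-VM ha-429000-m02 ).state`)
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state) // "Running" or "Off", as in the trace
	if state == "Running" {
		ip, err := ps(`(( Hyper-V\Get-VM ha-429000-m02 ).networkadapters[0]).ipaddresses[0]`)
		if err != nil {
			panic(err)
		}
		fmt.Println("ip:", ip)
	}
}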

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (35.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.1290393s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (35.13s)

                                                
                                    
TestImageBuild/serial/Setup (184.61s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-727100 --driver=hyperv
E0203 11:38:28.187469    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-727100 --driver=hyperv: (3m4.6097231s)
--- PASS: TestImageBuild/serial/Setup (184.61s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.71s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-727100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-727100: (9.7120407s)
--- PASS: TestImageBuild/serial/NormalBuild (9.71s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.17s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-727100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-727100: (8.1651671s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.17s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.53s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-727100
E0203 11:39:48.942422    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-727100: (7.5311929s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.53s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.63s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-727100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-727100: (7.6321804s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.63s)

                                                
                                    
TestJSONOutput/start/Command (190.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-435300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-435300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m10.1656329s)
--- PASS: TestJSONOutput/start/Command (190.17s)

                                                
                                    
TestJSONOutput/start/Audit (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.04s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.57s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-435300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-435300 --output=json --user=testUser: (7.5709296s)
--- PASS: TestJSONOutput/pause/Command (7.57s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.8s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-435300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-435300 --output=json --user=testUser: (8.8001155s)
--- PASS: TestJSONOutput/unpause/Command (8.80s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (33.64s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-435300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-435300 --output=json --user=testUser: (33.6443678s)
--- PASS: TestJSONOutput/stop/Command (33.64s)

                                                
                                    
TestJSONOutput/stop/Audit (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.04s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.84s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-072200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-072200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (235.1253ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9027a25a-1aaf-47f3-af3a-d1ab34aa8f1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-072200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10e86a03-8b26-4f82-8afd-26e2baef100f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"bce42775-c6c9-41f6-aeaf-b15e2d8796e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f248aae-8166-4d59-a34c-1d7e41b8914a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"1317e2eb-50c1-4d67-89f4-92e14b2389e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20354"}}
	{"specversion":"1.0","id":"a973cc34-ba27-42dd-bae7-5f6b27cd4818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3d8b53f1-5a1d-46e9-90b4-7775b651b420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-072200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-072200
--- PASS: TestErrorJSONOutput (0.84s)
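
For reference: with --output=json, minikube start emits one CloudEvents-style JSON object per line (types io.k8s.sigs.minikube.step, .info and .error, as in the stdout above). A minimal sketch of the same invocation against a supported driver, using an illustrative profile name rather than one from this run:

    out/minikube-windows-amd64.exe start -p json-output-example --memory=2200 --output=json --wait=true --driver=hyperv
    out/minikube-windows-amd64.exe delete -p json-output-example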

                                                
                                    
TestMainNoArgs (0.23s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

                                                
                                    
TestMinikubeProfile (501.68s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-205000 --driver=hyperv
E0203 11:45:25.115587    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-205000 --driver=hyperv: (3m5.14532s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-097700 --driver=hyperv
E0203 11:49:48.948891    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:50:25.118495    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:51:12.034060    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-097700 --driver=hyperv: (3m6.0566289s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-205000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.2792688s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-097700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.3999691s)
helpers_test.go:175: Cleaning up "second-097700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-097700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-097700: (45.9836602s)
helpers_test.go:175: Cleaning up "first-205000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-205000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-205000: (39.2498622s)
--- PASS: TestMinikubeProfile (501.68s)
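
Condensed, the profile workflow this test exercises (profile names are the ones from this run):

    out/minikube-windows-amd64.exe start -p first-205000 --driver=hyperv
    out/minikube-windows-amd64.exe start -p second-097700 --driver=hyperv
    out/minikube-windows-amd64.exe profile first-205000
    out/minikube-windows-amd64.exe profile list -ojson
    out/minikube-windows-amd64.exe profile second-097700
    out/minikube-windows-amd64.exe profile list -ojson
    out/minikube-windows-amd64.exe delete -p second-097700
    out/minikube-windows-amd64.exe delete -p first-205000

minikube profile <name> switches the active profile; profile list -ojson reports all profiles as JSON.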

                                                
                                    
TestMountStart/serial/StartWithMountFirst (146.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0203 11:54:48.953055    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:55:08.201594    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 11:55:25.122428    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m25.1752832s)
--- PASS: TestMountStart/serial/StartWithMountFirst (146.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (8.87s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-261800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-261800 ssh -- ls /minikube-host: (8.8688164s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.87s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (145.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m24.2400985s)
--- PASS: TestMountStart/serial/StartWithMountSecond (145.24s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (8.84s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host: (8.8360257s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.84s)

                                                
                                    
TestMountStart/serial/DeleteFirst (26.5s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-261800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-261800 --alsologtostderr -v=5: (26.5003787s)
--- PASS: TestMountStart/serial/DeleteFirst (26.50s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.86s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host: (8.8614499s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.86s)

                                                
                                    
TestMountStart/serial/Stop (26.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-261800
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-261800: (26.3440713s)
--- PASS: TestMountStart/serial/Stop (26.35s)

                                                
                                    
TestMountStart/serial/RestartStopped (112.15s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-261800
E0203 11:59:48.954781    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:00:25.125998    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-261800: (1m51.1509616s)
--- PASS: TestMountStart/serial/RestartStopped (112.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (8.79s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host: (8.7861251s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.79s)
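
Condensed, the mount-start sequence exercised by the serial MountStart steps above (flags, ports and profile names are from this run; the ssh "ls /minikube-host" call is the mount check repeated after each step):

    out/minikube-windows-amd64.exe start -p mount-start-1-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
    out/minikube-windows-amd64.exe start -p mount-start-2-261800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
    out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host
    out/minikube-windows-amd64.exe delete -p mount-start-1-261800 --alsologtostderr -v=5
    out/minikube-windows-amd64.exe stop -p mount-start-2-261800
    out/minikube-windows-amd64.exe start -p mount-start-2-261800
    out/minikube-windows-amd64.exe -p mount-start-2-261800 ssh -- ls /minikube-host

Restarting the stopped profile without repeating the --mount flags still leaves /minikube-host accessible (VerifyMountPostStop).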

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (409.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-749300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0203 12:04:48.958495    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:05:25.129422    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:07:52.048559    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-749300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m27.5329223s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr: (22.0041026s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (409.54s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- rollout status deployment/busybox: (3.0194998s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- nslookup kubernetes.io: (1.8825076s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-c66bf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-749300 -- exec busybox-58667487b6-zgvmd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.38s)

                                                
                                    
TestMultiNode/serial/AddNode (222.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-749300 -v 3 --alsologtostderr
E0203 12:10:25.132177    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:11:48.214893    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-749300 -v 3 --alsologtostderr: (3m8.9970508s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr: (33.0586637s)
--- PASS: TestMultiNode/serial/AddNode (222.06s)
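
Condensed, the two-node bring-up plus node addition exercised above (profile name from this run):

    out/minikube-windows-amd64.exe start -p multinode-749300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
    out/minikube-windows-amd64.exe node add -p multinode-749300 -v 3 --alsologtostderr
    out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr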

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-749300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.17s)

                                                
                                    
TestMultiNode/serial/ProfileList (33.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (33.3506758s)
--- PASS: TestMultiNode/serial/ProfileList (33.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (332.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 status --output json --alsologtostderr: (32.8603279s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300:/home/docker/cp-test.txt
E0203 12:14:48.965503    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300:/home/docker/cp-test.txt: (8.7075286s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt": (8.7251267s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300.txt: (8.7265841s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt": (8.5685098s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt multinode-749300-m02:/home/docker/cp-test_multinode-749300_multinode-749300-m02.txt
E0203 12:15:25.135897    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt multinode-749300-m02:/home/docker/cp-test_multinode-749300_multinode-749300-m02.txt: (15.0366133s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt": (8.6888202s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test_multinode-749300_multinode-749300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test_multinode-749300_multinode-749300-m02.txt": (8.7386856s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt multinode-749300-m03:/home/docker/cp-test_multinode-749300_multinode-749300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt multinode-749300-m03:/home/docker/cp-test_multinode-749300_multinode-749300-m03.txt: (15.1008205s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test.txt": (8.728415s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test_multinode-749300_multinode-749300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test_multinode-749300_multinode-749300-m03.txt": (8.7403811s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300-m02:/home/docker/cp-test.txt: (8.6874656s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt": (8.6685167s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m02.txt: (8.6737038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt": (8.6797187s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt multinode-749300:/home/docker/cp-test_multinode-749300-m02_multinode-749300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt multinode-749300:/home/docker/cp-test_multinode-749300-m02_multinode-749300.txt: (15.2264793s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt": (8.7293088s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test_multinode-749300-m02_multinode-749300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test_multinode-749300-m02_multinode-749300.txt": (8.7114458s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt multinode-749300-m03:/home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m02:/home/docker/cp-test.txt multinode-749300-m03:/home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt: (15.1106703s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test.txt": (8.6399684s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test_multinode-749300-m02_multinode-749300-m03.txt": (8.7174935s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300-m03:/home/docker/cp-test.txt: (8.7216115s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt": (8.6624332s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300-m03.txt: (8.5812837s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt": (8.7104003s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt multinode-749300:/home/docker/cp-test_multinode-749300-m03_multinode-749300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt multinode-749300:/home/docker/cp-test_multinode-749300-m03_multinode-749300.txt: (14.9955168s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt": (8.6763706s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test_multinode-749300-m03_multinode-749300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300 "sudo cat /home/docker/cp-test_multinode-749300-m03_multinode-749300.txt": (8.6521654s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt multinode-749300-m02:/home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300-m03:/home/docker/cp-test.txt multinode-749300-m02:/home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt: (15.1207458s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m03 "sudo cat /home/docker/cp-test.txt": (8.7583888s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test_multinode-749300-m03_multinode-749300-m02.txt": (8.655838s)
--- PASS: TestMultiNode/serial/CopyFile (332.02s)
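
The copy operations above reduce to three forms of minikube cp, each verified with an ssh cat on the target node (paths and node names are the ones from this run):

    out/minikube-windows-amd64.exe -p multinode-749300 cp testdata\cp-test.txt multinode-749300:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile4218837707\001\cp-test_multinode-749300.txt
    out/minikube-windows-amd64.exe -p multinode-749300 cp multinode-749300:/home/docker/cp-test.txt multinode-749300-m02:/home/docker/cp-test_multinode-749300_multinode-749300-m02.txt
    out/minikube-windows-amd64.exe -p multinode-749300 ssh -n multinode-749300-m02 "sudo cat /home/docker/cp-test_multinode-749300_multinode-749300-m02.txt"

i.e. local file to node, node file to local path, and node to node.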

                                                
                                    
TestMultiNode/serial/StopNode (70.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 node stop m03
E0203 12:19:48.969293    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 node stop m03: (23.1806435s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status
E0203 12:20:25.139019    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-749300 status: exit status 7 (23.8317355s)

                                                
                                                
-- stdout --
	multinode-749300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-749300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-749300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-749300 status --alsologtostderr: exit status 7 (23.6803929s)

                                                
                                                
-- stdout --
	multinode-749300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-749300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-749300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 12:20:27.522787   12872 out.go:345] Setting OutFile to fd 1664 ...
	I0203 12:20:27.578710   12872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:20:27.578710   12872 out.go:358] Setting ErrFile to fd 1608...
	I0203 12:20:27.578710   12872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 12:20:27.591398   12872 out.go:352] Setting JSON to false
	I0203 12:20:27.591398   12872 mustload.go:65] Loading cluster: multinode-749300
	I0203 12:20:27.591398   12872 notify.go:220] Checking for updates...
	I0203 12:20:27.592405   12872 config.go:182] Loaded profile config "multinode-749300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 12:20:27.592405   12872 status.go:174] checking status of multinode-749300 ...
	I0203 12:20:27.593503   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:20:29.596880   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:29.596978   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:29.596978   12872 status.go:371] multinode-749300 host status = "Running" (err=<nil>)
	I0203 12:20:29.596978   12872 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:20:29.597664   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:20:31.577916   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:31.577916   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:31.578008   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:20:33.926041   12872 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:20:33.926041   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:33.926538   12872 host.go:66] Checking if "multinode-749300" exists ...
	I0203 12:20:33.936266   12872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 12:20:33.936266   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300 ).state
	I0203 12:20:35.878868   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:35.878868   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:35.878962   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300 ).networkadapters[0]).ipaddresses[0]
	I0203 12:20:38.193708   12872 main.go:141] libmachine: [stdout =====>] : 172.25.1.53
	
	I0203 12:20:38.194742   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:38.195282   12872 sshutil.go:53] new ssh client: &{IP:172.25.1.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300\id_rsa Username:docker}
	I0203 12:20:38.297899   12872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3615842s)
	I0203 12:20:38.305858   12872 ssh_runner.go:195] Run: systemctl --version
	I0203 12:20:38.327650   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:20:38.354185   12872 kubeconfig.go:125] found "multinode-749300" server: "https://172.25.1.53:8443"
	I0203 12:20:38.354254   12872 api_server.go:166] Checking apiserver status ...
	I0203 12:20:38.362863   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 12:20:38.394164   12872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup
	W0203 12:20:38.412846   12872 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 12:20:38.420821   12872 ssh_runner.go:195] Run: ls
	I0203 12:20:38.427838   12872 api_server.go:253] Checking apiserver healthz at https://172.25.1.53:8443/healthz ...
	I0203 12:20:38.436789   12872 api_server.go:279] https://172.25.1.53:8443/healthz returned 200:
	ok
	I0203 12:20:38.436874   12872 status.go:463] multinode-749300 apiserver status = Running (err=<nil>)
	I0203 12:20:38.436932   12872 status.go:176] multinode-749300 status: &{Name:multinode-749300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 12:20:38.436932   12872 status.go:174] checking status of multinode-749300-m02 ...
	I0203 12:20:38.437519   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:20:40.375567   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:40.375646   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:40.375646   12872 status.go:371] multinode-749300-m02 host status = "Running" (err=<nil>)
	I0203 12:20:40.375646   12872 host.go:66] Checking if "multinode-749300-m02" exists ...
	I0203 12:20:40.376381   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:20:42.319789   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:42.319858   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:42.319926   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:20:44.678379   12872 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:20:44.678379   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:44.678379   12872 host.go:66] Checking if "multinode-749300-m02" exists ...
	I0203 12:20:44.687188   12872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 12:20:44.687188   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m02 ).state
	I0203 12:20:46.670091   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0203 12:20:46.670614   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:46.670614   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-749300-m02 ).networkadapters[0]).ipaddresses[0]
	I0203 12:20:49.006503   12872 main.go:141] libmachine: [stdout =====>] : 172.25.8.35
	
	I0203 12:20:49.006549   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:49.006549   12872 sshutil.go:53] new ssh client: &{IP:172.25.8.35 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-749300-m02\id_rsa Username:docker}
	I0203 12:20:49.110547   12872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.423309s)
	I0203 12:20:49.118789   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 12:20:49.143471   12872 status.go:176] multinode-749300-m02 status: &{Name:multinode-749300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0203 12:20:49.143471   12872 status.go:174] checking status of multinode-749300-m03 ...
	I0203 12:20:49.144389   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-749300-m03 ).state
	I0203 12:20:51.062106   12872 main.go:141] libmachine: [stdout =====>] : Off
	
	I0203 12:20:51.062106   12872 main.go:141] libmachine: [stderr =====>] : 
	I0203 12:20:51.062106   12872 status.go:371] multinode-749300-m03 host status = "Stopped" (err=<nil>)
	I0203 12:20:51.062106   12872 status.go:384] host is not running, skipping remaining checks
	I0203 12:20:51.062106   12872 status.go:176] multinode-749300-m03 status: &{Name:multinode-749300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (70.70s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (176.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 node start m03 -v=7 --alsologtostderr: (2m24.0418677s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-749300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-749300 status -v=7 --alsologtostderr: (32.5191385s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (176.73s)
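
Condensed, the single-node stop/restart cycle exercised by StopNode and StartAfterStop; while m03 is down, status reports that node as Stopped and exits with status 7, as shown in the stdout above:

    out/minikube-windows-amd64.exe -p multinode-749300 node stop m03
    out/minikube-windows-amd64.exe -p multinode-749300 status
    out/minikube-windows-amd64.exe -p multinode-749300 node start m03 -v=7 --alsologtostderr
    out/minikube-windows-amd64.exe -p multinode-749300 status -v=7 --alsologtostderr
    kubectl get nodes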

                                                
                                    
TestPreload (472.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0203 12:35:25.149460    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m39.5823334s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-418200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-418200 image pull gcr.io/k8s-minikube/busybox: (8.0413782s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-418200
E0203 12:39:48.982027    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-418200: (38.9026935s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0203 12:40:25.152520    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:41:12.074863    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m38.1932511s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-418200 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-418200 image list: (6.7027664s)
helpers_test.go:175: Cleaning up "test-preload-418200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-418200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-418200: (41.1047s)
--- PASS: TestPreload (472.53s)
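
Condensed, the preload check above: create a cluster with --preload=false on an older Kubernetes version, pull an extra image, stop, start again, and list images so the test can check that the pulled image survived the restart (versions and profile name are from this run):

    out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
    out/minikube-windows-amd64.exe -p test-preload-418200 image pull gcr.io/k8s-minikube/busybox
    out/minikube-windows-amd64.exe stop -p test-preload-418200
    out/minikube-windows-amd64.exe start -p test-preload-418200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
    out/minikube-windows-amd64.exe -p test-preload-418200 image list
    out/minikube-windows-amd64.exe delete -p test-preload-418200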

                                                
                                    
TestScheduledStopWindows (312.12s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-781000 --memory=2048 --driver=hyperv
E0203 12:44:48.985737    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:45:08.243373    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:45:25.156034    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-781000 --memory=2048 --driver=hyperv: (3m3.1781175s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5m: (9.8516786s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-781000 -n scheduled-stop-781000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-781000 -n scheduled-stop-781000: exit status 1 (10.0113071s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-781000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-781000 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.8605779s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5s: (9.6646088s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-781000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-781000: exit status 7 (2.23747s)

                                                
                                                
-- stdout --
	scheduled-stop-781000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-781000 -n scheduled-stop-781000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-781000 -n scheduled-stop-781000: exit status 7 (2.1820843s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-781000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-781000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-781000: (26.1291994s)
--- PASS: TestScheduledStopWindows (312.12s)
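
Condensed, the scheduled-stop flow above: schedule a stop five minutes out, inspect the pending schedule, reschedule to five seconds, and confirm the profile ends up Stopped (status exits with status 7 once the VM is down):

    out/minikube-windows-amd64.exe start -p scheduled-stop-781000 --memory=2048 --driver=hyperv
    out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5m
    out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-781000 -n scheduled-stop-781000
    out/minikube-windows-amd64.exe ssh -p scheduled-stop-781000 -- sudo systemctl show minikube-scheduled-stop --no-page
    out/minikube-windows-amd64.exe stop -p scheduled-stop-781000 --schedule 5s
    out/minikube-windows-amd64.exe status -p scheduled-stop-781000
    out/minikube-windows-amd64.exe delete -p scheduled-stop-781000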

                                                
                                    
TestRunningBinaryUpgrade (892.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2329058181.exe start -p running-upgrade-698100 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2329058181.exe start -p running-upgrade-698100 --memory=2200 --vm-driver=hyperv: (7m2.8743127s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-698100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0203 13:01:48.257425    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-698100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m43.2977043s)
helpers_test.go:175: Cleaning up "running-upgrade-698100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-698100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-698100: (1m5.2991605s)
--- PASS: TestRunningBinaryUpgrade (892.12s)
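
Condensed, the running-binary upgrade above: the cluster is created with a previously released minikube (v1.26.0, staged by the test as a temporary .exe and still using the older --vm-driver flag), then start is re-run on the same profile with the binary under test:

    C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2329058181.exe start -p running-upgrade-698100 --memory=2200 --vm-driver=hyperv
    out/minikube-windows-amd64.exe start -p running-upgrade-698100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
    out/minikube-windows-amd64.exe delete -p running-upgrade-698100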

                                                
                                    
TestKubernetesUpgrade (1256.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
E0203 12:49:48.988856    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 12:50:25.159805    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (7m55.683459s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-447600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-447600: (38.380257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-447600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-447600 status --format={{.Host}}: exit status 7 (2.2527721s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0203 12:57:52.088610    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (7m2.1100331s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-447600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (245.9343ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-447600] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-447600
	    minikube start -p kubernetes-upgrade-447600 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4476002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-447600 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0203 13:04:48.998521    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 13:05:25.169884    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-447600 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (4m33.2110335s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-447600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-447600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-447600: (44.8074026s)
--- PASS: TestKubernetesUpgrade (1256.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-426800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-426800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (300.8545ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-426800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (804.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1795324760.exe start -p stopped-upgrade-130000 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1795324760.exe start -p stopped-upgrade-130000 --memory=2200 --vm-driver=hyperv: (6m12.9471311s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1795324760.exe -p stopped-upgrade-130000 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1795324760.exe -p stopped-upgrade-130000 stop: (34.1155324s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0203 12:59:48.995205    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-826100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0203 13:00:25.166221    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-130000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m37.0149612s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (804.08s)

                                                
                                    
x
+
TestPause/serial/Start (472.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-411900 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0203 12:55:25.162723    5452 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-266500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-411900 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (7m52.6819138s)
--- PASS: TestPause/serial/Start (472.68s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (299.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-411900 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-411900 --alsologtostderr -v=1 --driver=hyperv: (4m59.3696559s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (299.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (9.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-130000
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-130000: (9.1215317s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.12s)

                                                
                                    
x
+
TestPause/serial/Pause (8.07s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-411900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-411900 --alsologtostderr -v=5: (8.0719427s)
--- PASS: TestPause/serial/Pause (8.07s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (12.72s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-411900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-411900 --output=json --layout=cluster: exit status 2 (12.7150001s)

                                                
                                                
-- stdout --
	{"Name":"pause-411900","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-411900","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (12.72s)
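Note on the output above: exit status 2 from "minikube status" is what this test expects for a paused cluster, and the JSON encodes the per-component state (418 "Paused" for the apiserver, 405 "Stopped" for the kubelet). For readers who want to inspect that --layout=cluster JSON programmatically, below is a minimal, illustrative Go sketch; the struct and field names are ad hoc, taken only from the fields visible in the output above (not from minikube's internal types), and the embedded JSON is abridged from that output.

    // status_sketch.go: illustrative only. Decodes the --layout=cluster JSON
    // shown above; struct/field names are ad hoc, based on the visible output.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterStatus struct {
        Name          string `json:"Name"`
        StatusName    string `json:"StatusName"`
        BinaryVersion string `json:"BinaryVersion"`
        Nodes         []node `json:"Nodes"`
    }

    func main() {
        // Abridged from the -- stdout -- block above (Step/StepDetail omitted).
        raw := `{"Name":"pause-411900","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.35.0","Nodes":[{"Name":"pause-411900","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

        var st clusterStatus
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("cluster %s: %s (minikube %s)\n", st.Name, st.StatusName, st.BinaryVersion)
        for _, n := range st.Nodes {
            for name, c := range n.Components {
                fmt.Printf("  %s/%s: %s (code %d)\n", n.Name, name, c.StatusName, c.StatusCode)
            }
        }
    }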

                                                
                                    
x
+
TestPause/serial/Unpause (8.14s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-411900 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-411900 --alsologtostderr -v=5: (8.1435379s)
--- PASS: TestPause/serial/Unpause (8.14s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (8.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-411900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-411900 --alsologtostderr -v=5: (8.733684s)
--- PASS: TestPause/serial/PauseAgain (8.73s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (47.46s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-411900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-411900 --alsologtostderr -v=5: (47.4564262s)
--- PASS: TestPause/serial/DeletePaused (47.46s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (13.88s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (13.8761013s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.88s)
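As a companion to the verification above, here is a minimal, illustrative Go sketch for checking that a deleted profile no longer appears in "minikube profile list --output json". The "valid"/"invalid" keys and the per-profile "Name" field are assumptions about the profile-list JSON schema (they match recent minikube releases but are not taken from this run), so treat those field names as hypothetical.

    // profiles_sketch.go: illustrative only. Confirms a deleted profile is gone
    // from `minikube profile list --output json`. The "valid"/"invalid" keys and
    // the "Name" field are assumptions, not verified against this test run.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type profile struct {
        Name string `json:"Name"`
    }

    type profileList struct {
        Valid   []profile `json:"valid"`
        Invalid []profile `json:"invalid"`
    }

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            log.Fatal(err)
        }
        for _, p := range append(pl.Valid, pl.Invalid...) {
            if p.Name == "pause-411900" {
                fmt.Println("profile still present:", p.Name)
                return
            }
        }
        fmt.Println("profile pause-411900 not found (deleted as expected)")
    }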

                                                
                                    

Test skip (33/213)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-266500 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-266500 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 9628: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (5.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-266500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:991: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-266500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0113203s)

                                                
                                                
-- stdout --
	* [functional-266500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 10:56:29.830868   11288 out.go:345] Setting OutFile to fd 1568 ...
	I0203 10:56:29.883869   11288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:56:29.883869   11288 out.go:358] Setting ErrFile to fd 1712...
	I0203 10:56:29.883869   11288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:56:29.905068   11288 out.go:352] Setting JSON to false
	I0203 10:56:29.907375   11288 start.go:129] hostinfo: {"hostname":"minikube5","uptime":164791,"bootTime":1738415398,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 10:56:29.907375   11288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 10:56:29.913376   11288 out.go:177] * [functional-266500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 10:56:29.915239   11288 notify.go:220] Checking for updates...
	I0203 10:56:29.917238   11288 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 10:56:29.921239   11288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 10:56:29.923240   11288 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 10:56:29.927159   11288 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 10:56:29.928777   11288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:56:29.932760   11288 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 10:56:29.932760   11288 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:997: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.01s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-266500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-266500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0295513s)

                                                
                                                
-- stdout --
	* [functional-266500] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20354
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 10:56:24.817380    8368 out.go:345] Setting OutFile to fd 1736 ...
	I0203 10:56:24.871391    8368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:56:24.871391    8368 out.go:358] Setting ErrFile to fd 1672...
	I0203 10:56:24.871391    8368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0203 10:56:24.891929    8368 out.go:352] Setting JSON to false
	I0203 10:56:24.895344    8368 start.go:129] hostinfo: {"hostname":"minikube5","uptime":164786,"bootTime":1738415398,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0203 10:56:24.895344    8368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0203 10:56:24.899884    8368 out.go:177] * [functional-266500] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0203 10:56:24.902983    8368 notify.go:220] Checking for updates...
	I0203 10:56:24.905163    8368 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0203 10:56:24.908108    8368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 10:56:24.910656    8368 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0203 10:56:24.913168    8368 out.go:177]   - MINIKUBE_LOCATION=20354
	I0203 10:56:24.915122    8368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 10:56:24.917337    8368 config.go:182] Loaded profile config "functional-266500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0203 10:56:24.918367    8368 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1042: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    